CSA Night At The Museum
Goals For Second Trimester
- Despite not being physically there, I was able to absorb a ton of feedback on our project from my peers and parents during my 40 minutes on FaceTime. Based on what I saw, one thing we may want to improve is the resolution of the images, increasing it from 28×28 to something larger: although 28×28 images are faster to train and test on, they can be inaccurate because of the multitude of signs and differing environments. Additionally, we want to look at adding more signs, like words or longer phrases. While doing some research on the topic, I was able to find this video, which shows an AI that performs better and supports more phrases than we currently have available, while using a higher resolution to increase accuracy, especially across many similar signs.
- Additionally, I want to implement generalization in our code, through data augmentation, so that users don't have to perfectly position their hand within the box to get an accurate result. While observing and presenting at Night at the Museum over FaceTime, I noticed many of my peers and parents had difficulty orienting their hand within the webcam box closely enough to the training dataset for the model to work. With augmentation, we essentially create new training images by shifting and rotating the existing ones, so no matter where you put your hand or how you orient it, the model should still get it right, because it has been trained on the hand in many positions and orientations (see the sketch right after this list).
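Here is a minimal sketch of what that augmentation could look like, assuming we stay with Keras and our 28×28 grayscale dataset; the `x_train`/`y_train` arrays and the commented-out `model.fit` call are placeholders for our actual data and model:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder training data: swap in our real Sign Language MNIST arrays.
x_train = np.random.rand(1000, 28, 28, 1)       # grayscale images, shape (N, H, W, 1)
y_train = np.random.randint(0, 24, size=1000)   # 24 static letter classes

# Randomly shift, rotate, and zoom each image so the model sees hands in
# many positions and orientations, not just the centered training pose.
augmenter = ImageDataGenerator(
    rotation_range=15,        # rotate up to +/- 15 degrees
    width_shift_range=0.15,   # shift horizontally up to 15% of the width
    height_shift_range=0.15,  # shift vertically up to 15% of the height
    zoom_range=0.1,           # zoom in or out up to 10%
)

# The generator yields freshly augmented batches every epoch:
# model.fit(augmenter.flow(x_train, y_train, batch_size=32), epochs=10)
```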
Images from Presentation
Blog on Me and My Team
Reflection on Night at the Museum Presentation:
I think Night at the Museum went great. Seeing people genuinely interested in the same thing I'm so passionate about was exciting. I do wish I had been able to show up in person, because I would have loved to have more in-depth conversations with some of the more "techy" people. Despite this, I still had some really good conversations about the AI with some really interesting people. I also asked my dad to walk around the room and show me some of the other projects, like Ethan Zhou's stock AI, which I found super interesting because even though it's also an AI, it uses a totally different algorithm. I love events like these; they remind me of the NSF presentation I went to a few months ago. It's a room filled with people who share your interests and are open to having deeper discussions and learning with you. Both times I've presented, I would argue that the discussions that take place at these events are worth more than the presentation itself, and I'm super glad I was still able to have some of those despite not being there in person.
Many of the earlier people we presented to were CSA students, who would in my opinion be considered "techy," so those earlier conversations went a lot more in depth than the later ones. We did encounter an issue with the code predicting "M," "Q," and "J" more often than other letters. This most likely stems from the low resolution: normal MNIST data is 28×28 hand-drawn digits 0-9, which are a lot easier to tell apart than 24 signs that sometimes look nearly identical, so to prevent this we should increase the resolution of the sign language dataset. Doing that, however, makes the required orientation and position of the hand even more specific than before, so we could implement augmentation, or draw a box on the screen that the user places their hand in when they want it tested, which would let the code always know where their hand is located.With these changes in mind, I expect the model to be far more accurate than it is currently.
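Here is a rough sketch of that on-screen box idea, assuming we use OpenCV for the webcam feed and a Keras model that takes 28×28 grayscale input; the model path, window name, box coordinates, and letter mapping are all placeholder assumptions, so the prediction lines are left commented out:

```python
import cv2
# from tensorflow.keras.models import load_model
# model = load_model("sign_model.h5")  # placeholder path for our trained model
# LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY"  # assumed mapping for our 24 classes

BOX = (200, 100, 400, 300)  # x1, y1, x2, y2 of the fixed hand region

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    x1, y1, x2, y2 = BOX
    # Draw the box so the user knows exactly where to place their hand.
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    # Crop just the box region, convert to grayscale, and resize to the
    # model's input size, so where the hand sits in the frame no longer matters.
    roi = frame[y1:y2, x1:x2]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (28, 28)).astype("float32") / 255.0
    batch = small.reshape(1, 28, 28, 1)

    # pred = model.predict(batch)            # probabilities for the 24 letters
    # letter = LETTERS[pred.argmax()]        # map class index back to a letter
    # cv2.putText(frame, letter, (x1, y1 - 10),
    #             cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

    cv2.imshow("Sign Language Demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```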
Other Work at Night at the Museum
Here is some other work, not from CSA, that I found super cool. My sister was always more into art than me; I took up computer science (the better one), but I still find artwork super interesting, and seeing what these high schoolers were able to do is insanely impressive.