Stephanie's Updates


Week 1 (2/5/2018 - 2/11/2018)

It begins

This week's activities revolved around meetings and tutorial work. I attended our group meeting with the Lexmark customers, who described the project in more detail, outlined what portions of it we will be addressing over this semester, and showed us examples of similar image-processing based projects for reference. Later on, my teammates and I met again to discuss how to best approach the tasks we were assigned.
To familiarize myself with both the software we'll be using and the general concepts behind our project, I worked on the object recognition tutorial that was provided to the group by one of the Lexmark customers. I'm also trying to learn more about neural nets through introductory lectures on YouTube.

Week 2 (2/12/2018 - 2/18/2018)

We have liftoff!

Because I was one of the three people using Windows to complete the object recognition tutorial, I had a few extra issues to overcome at the beginning of this week. Despite trying various shells, attempting to translate the tutorial's Linux commands into something Windows-friendly, and re-downloading Python versions and other necessary files several times, I was able to get only halfway through the tutorial at first. However, thanks to my teammates' suggestions, I finally got the tutorial to work using the browser-based shell in my Google Cloud Platform account!
Once everyone was caught up with the unexpectedly difficult tutorial process, the group met to discuss more specifics. We are now at a stage where we can start developing a dataset for initial training. Specifically, we decided to start with four individual cards: the queen of hearts, the jack of hearts, the king of hearts, and the ace of diamonds. Each person in the group will provide at least 60 images of each of these cards, annotated with labelImg; these will be collected in a Google Drive folder so that we can more easily pool them into a shared Google Cloud bucket. This way we get the most out of our Google Cloud credits, and the training proper can all be done cohesively on one machine even though the image annotation work is split evenly among the group members.
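For anyone curious what those annotations actually look like: labelImg saves each image's bounding boxes as a Pascal VOC-style XML file, so pooling everyone's work mostly means walking those files. The sketch below is only a rough illustration of that idea -- the folder name, CSV columns, and helper are made up for this example, not our actual pipeline:

```python
# Rough sketch: collect labelImg's Pascal VOC XML annotations into one CSV so the
# pooled Google Drive images are easier to turn into a training set later.
# Folder layout and column choices here are assumptions, not our final pipeline.
import csv
import glob
import xml.etree.ElementTree as ET

def xml_to_rows(xml_dir):
    rows = []
    for xml_file in glob.glob(f"{xml_dir}/*.xml"):
        root = ET.parse(xml_file).getroot()
        filename = root.find("filename").text
        width = int(root.find("size/width").text)
        height = int(root.find("size/height").text)
        for obj in root.findall("object"):
            box = obj.find("bndbox")
            rows.append([
                filename, width, height,
                obj.find("name").text,  # e.g. "QueenOfHearts"
                int(box.find("xmin").text), int(box.find("ymin").text),
                int(box.find("xmax").text), int(box.find("ymax").text),
            ])
    return rows

if __name__ == "__main__":
    with open("card_annotations.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "width", "height", "class",
                         "xmin", "ymin", "xmax", "ymax"])
        writer.writerows(xml_to_rows("annotations"))
```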
As far as personal progress in this regard goes, I have annotated my 60 images for each of these cards. Here's hoping the upcoming training will go well!

Week 3 (2/19/2018 - 2/25/2018)

A bit of cleanup

To fix the memory issues we were encountering during training, we devoted this week to resizing the overly large images of the four cards from last week. Additionally, since it was already necessary to re-annotate the smaller images, we took the opportunity to normalize how everyone writes the card names. I first reduced my images from their original 3024x4032 pixels down to 240x320 pixels, then redid my annotations so that the card names are now written as "AceOfDiamonds", "KingOfHearts", and so on. No new cards were assigned this week so that we can be sure our current method works before taking and annotating many more images.
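For reference, the resizing step itself is only a few lines. The sketch below uses Pillow, which is an assumption on my part (any image library would do), and since we re-annotated the smaller images from scratch it makes no attempt to rescale the old bounding boxes:

```python
# Rough sketch of the resizing step using Pillow (an assumption -- any image
# library would do). Shrinks each portrait 3024x4032 photo down to 240x320.
import glob
import os
from PIL import Image

SRC_DIR = "original_images"   # full-size photos straight off the phone
DST_DIR = "resized_images"    # smaller copies for training
TARGET_SIZE = (240, 320)

os.makedirs(DST_DIR, exist_ok=True)
for path in glob.glob(f"{SRC_DIR}/*.jpg"):
    img = Image.open(path)
    small = img.resize(TARGET_SIZE)
    small.save(os.path.join(DST_DIR, os.path.basename(path)))
```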
We also have our Presentation U practice set up for Monday, 2/26/2018, and our actual midterm presentation coming up soon, so the group set aside some time to rehearse everyone's parts.

Week 4 (2/26/2018 - 3/4/2018)

Game-changing

After discussing the direction of our project a little more with our customers -- and given the amount of time we have left in the semester -- we've decided to work with a smaller pool of cards so we can build an interactive web interface and get a little more programming into the project. Since we wouldn't be able to make a complete library of poker hands without the full deck, Chelsea suggested a different game, Euchre, which uses only the 9 through ace of each suit. This will give us a more complete end product as far as the website goes, and it gives us a more solid direction for choosing which cards to annotate.
This Monday, we did our practice run of the midterm presentation at Presentation U and got very nice positive feedback from the employee who worked with us. Our usage of images throughout the slides went over well and enhanced the points each of us was trying to verbally get across. We were also told that our descriptions were detailed enough that a non-CS-major audience would easily be able to understand our presentation.
We've achieved great progress so far in training the neural net! It now seems to be getting most of its guesses right (at least for the current library of images). Since the smaller images are already proving to work much better, four more cards were assigned for the week: Ace of Hearts, Jack of Diamonds, King of Diamonds, and Queen of Diamonds. I have finished my share of these pictures and annotations and added them to our training library.

Weeks 5 and 6 (3/5/2018 - 3/18/2018)

Halfway there

A lot of our focus during the work week of 3/5/2018 - 3/9/2018 went into polishing our midterm presentation for that Friday, 3/9/2018. We first followed the advice from our Presentation U practice run, modifying the wordier slides (requirements and schedule/milestones) so that they matched the more visual, easy-to-follow format of the other slides. From there, most of the slides were adjusted further from the version we used at Presentation U, whether because of issues with the presentation's flow or because of the change in our project's direction since we first drafted the slides. The slides that arguably required the most modification were, again, requirements and summary/schedule/milestones, as these needed to be almost completely overhauled for both the presentation and the website.
Once we had established a more cohesive and up-to-date version of our original presentation, we did final runs of our transitions, timing, and what each of us should and shouldn't address in our sections of the presentation. Since I was handling requirements and Chelsea was handling summary/schedule/milestones, we made sure that the points we were making matched up but weren't redundant.

Due to the extra time spring break provides, we assigned eight cards to be photographed and annotated for the training library this time: the Ace, Jack, King, and Queen of the Clubs and Spades suits. This leaves us with only the 9 and 10 of all four suits to add to the library for our full Euchre card deck, so this is great progress! We will be starting on the website part of the project soon, so I've recently been looking through some Django and Flask overviews since I have no prior experience with either.
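Since I'm just getting oriented, here's the kind of bare-bones Flask example those overviews walk through -- not our actual site, just the general shape of accepting an uploaded image and handing it off to some processing step (the route names and form fields are made up):

```python
# Toy Flask app along the lines of the tutorials I've been reading -- not our
# real website, just the pattern of "accept an image, process it, show a result".
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["GET"])
def index():
    # Minimal upload form; the real interface will be much nicer than this.
    return """
        <form method="POST" action="/submit" enctype="multipart/form-data">
            <input type="file" name="card_image">
            <input type="submit" value="Identify my cards">
        </form>
    """

@app.route("/submit", methods=["POST"])
def submit():
    image = request.files["card_image"]
    # Placeholder: eventually the trained model would be called here.
    return f"Received {image.filename}; predictions would go here."

if __name__ == "__main__":
    app.run(debug=True)
```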

Week 7 (3/19/2018 - 3/25/2018)

Finally done with annotations!

This past week, all of us made the push to finish the last eight cards required for our Euchre library. I used some of my spring break time to take pictures of all of the 9s and 10s in advance, so I was able to focus on knocking out the annotations for everything I had left. It's very nice to have this more monotonous work out of the way! We will now have a fully trained model, and if its performance is high enough (the trend in precision across our previous training iterations suggests it will be), we can proceed to the next step of creating a smaller library of more complex images for the test set, since it is bad practice to evaluate a neural net on its own training data. This will also be part of our formal testing for the project.
We are now beginning the more exciting phase of our project: actually building the website application! Our customers recommended that we focus on this now so that we would have both a more polished end product and an easier time with testing. Chelsea and Rupal have already laid excellent groundwork for the server and client sides of the website, so in the next few weeks my teammates and I will be working to connect the two and to add a little more to the interface for ease of use.

Week 8 (3/26/2018 - 4/1/2018)

Web application work

This week, I focused on developing the front-end of our web application a little more, specifically the webpage the user reaches after submitting their image. My goal was to realize some of the expectations we had set for the interface -- both visually and in the way it processes the card name and percent certainty data passed to it for a user-submitted image. With that accomplished, my hope is that it will now be easier to connect the front-end and back-end over these last few weeks.
There are still a couple of bugs and incomplete sections of code to work out before I fully connect it to the main website that Rupal drew up. So for now, my webpage is only available in the submitPage folder of our git repository, and not in the dropdown menu with the more finished sections of our code. I have more than enough material to provide a brief overview of it here, though:

The user will first be taken to a loading screen while the image is given to our trained neural net for analysis (since the 'net is not yet integrated with the website, I arbitrarily chose to have the loading screen run for a few seconds). A still shot of the loading screen is pictured below; both the loading wheel and the ellipsis after "Thinking" are animated in practice.
[SubmitPageScreenshot1]
The user is then presented with a "results" page. Because I didn't have real values passed in for the card names or percent certainties, I wrote code that generates 5 card names and 5 associated percent certainties at random, for the sake of having a finished proof of concept. Our real percent certainties are much higher than this!
[SubmitPageScreenshot2]
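The placeholder values themselves live in the page's own code, but the idea is simple enough to sketch in Python: draw five Euchre card names and five made-up certainties at random. Everything below is illustrative, not the code in our repository:

```python
# Quick illustration of the placeholder logic on the results page: pick five
# card names and five "percent certainties" at random until the real neural
# net output is wired in.
import random

SUITS = ["Hearts", "Diamonds", "Clubs", "Spades"]
RANKS = ["Nine", "Ten", "Jack", "Queen", "King", "Ace"]
EUCHRE_DECK = [f"{rank}Of{suit}" for suit in SUITS for rank in RANKS]

def fake_results(num_cards=5):
    cards = random.sample(EUCHRE_DECK, num_cards)
    certainties = [round(random.uniform(0, 100), 1) for _ in cards]
    return list(zip(cards, certainties))

for card, certainty in fake_results():
    print(f"{card}: {certainty}% certain")
```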
If one of the card names happens to be incorrect, the user can click the associated "Am I wrong?" button. This enables the drop-down menu and allows the user to select the correct card name without the risk of mistyping it.
[SubmitPageScreenshot3]

[SubmitPageScreenshot4]
In the final version of this webpage, the [insert helpful advice here] placeholder will be replaced by actual tips for the user based on the hand they have, if providing tips in this format proves to work well.

Week 9 (4/2/2018 - 4/8/2018)

Approaching the end!

Since this week was our testing review and next week will be our code review, a lot of the group's focus went toward putting together organized documentation for both meetings. Our testing review was on Friday, and a few fixes suggested for our Testing Plan webpage have now been implemented: less ambiguous phrasing in certain test cases (listing the specific expected return values when the model or web app is given invalid data), references on how Euchre is played for whoever works on the project in the future, and images to reinforce the descriptions in certain test cases (for instance, how much of a card has to be obscured for it to be considered "truncated"?). I also embedded a Google Drive spreadsheet at the bottom of the page, which will record which tests passed and failed, another item requested in our testing review meeting.
Apart from keeping our documentation webpages up to date, our remaining tasks for the project mostly involve tying the front and back ends of our web application together. A lot of progress has been made on this front, but a few bugs remain. The loading page does not yet run simultaneously with the server's result processing as intended; right now the two run one after the other instead of in parallel. This may be due to which page the loading animation is tied to, so having the Submit button hide the main page div and unhide the loading animation may fix the issue. We also still need to reliably store user-submitted corrections for any card guesses the neural net got wrong (that is, the selections from the drop-down menus I described in my last weekly update). However, our other major concern -- issues with our card prediction accuracy -- seems to be close to resolution based on the most recent results achieved by modifying our dataset.

Week 10 (4/9/2018 - 4/15/2018)

Finishing touches...

This week mostly involved seeing how the model's training accuracy responded to various adjustments to the dataset, and from there how it performed in testing. Training accuracy is pretty good for the most part! In testing, the model does seem to have some trouble identifying all six cards in a multi-card image, though the images given to it so far have been heavily truncated (the cards stacked on top of each other), which may have been part of the problem. To address this, I added images to our testing set in which the cards of the Euchre hand are laid side by side in a row or arranged in a 2x3 "matrix", with no overlapping cards in either case, so hopefully the model will identify those better. If it does, we'll probably add an example or a text suggestion telling the user that the neural net makes its best guesses when the cards don't overlap. The model also seems to perform best with the cards a certain distance from the camera and on a plain background, which luckily is the format our test images (and most likely the in-game user's images) will take.
We also had our code review. Based on the feedback from that, we will be adding the documentation and comments from our README file into our code. We will also be indicating which parts of the neural net code we had to modify to get it to switch functionality from pet identification to card identification.
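One concrete example of the pets-to-cards changes is the label map, which tells the object detection code what each numeric class ID means. Below is a rough sketch of generating one for the 24 Euchre cards; the file name is made up, and the pbtxt layout follows the TensorFlow Object Detection API's label map format as I understand it, so treat it as approximate rather than as our exact file.

```python
# Rough sketch of one of the "pets -> cards" swaps: writing a label map that
# lists the 24 Euchre cards instead of pet breeds. File name and exact format
# are assumptions based on the Object Detection API's pbtxt label maps.
SUITS = ["Hearts", "Diamonds", "Clubs", "Spades"]
RANKS = ["Nine", "Ten", "Jack", "Queen", "King", "Ace"]

with open("euchre_label_map.pbtxt", "w") as f:
    for i, name in enumerate(f"{r}Of{s}" for s in SUITS for r in RANKS):
        f.write("item {\n")
        f.write(f"  id: {i + 1}\n")   # IDs start at 1; 0 is reserved
        f.write(f"  name: '{name}'\n")
        f.write("}\n\n")
```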
Going forward, we have our practice presentation coming up soon, so we will be focusing on that for the most part next week, along with a few final tweaks to the web application and the training/testing of the model.

Week 11 (4/16/2018 - 4/22/2018)

It's a wrap! (Final update)

Our practice presentation in Marksbury was on Friday, so our group met over the week to create our slides and go through a few practice runs of the content we wanted to cover. We got great feedback on the presentation; probably the biggest concern was that the default text color for the slide theme we chose turned out to be a gray color that was not easy to see on either the big screen or the laptop we were projecting from. I went back through and changed the font color to black for all of our slides -- and lightened the background for the very first slide, which had dark text on a fairly dark background -- in order to improve the contrast between the text and background throughout our presentation. Otherwise, our ratio of images and text was reasonable and the information we presented was easily understandable, so it seems that we are pretty well prepared for the real thing next Friday.
The plan for our remaining time before the final presentation is to clean up the last of our documentation and comments, so that we will hopefully be able to give the finished product to our customer and Dr. Piwowarski before finals week. This includes our website (the course one, that is, not the web application), so I have been combing through that and resolving any typos or other issues I find. From there, we only need to finish filling out our test case table, and we should be all set. It's been a great project!