We are interested in enhancing human-computer interaction in pattern-recognition applications where higher accuracy is required than automated systems currently achieve, but where there is enough time for a limited amount of human interaction. This topic has so far received only limited attention from the research community.
This project is a continuation of earlier work.
Phase 1: Training data creation.
New training data will be obtained using a customer-provided, Android-based tool. A new database of flower images will be created from high-resolution pictures taken with an Android phone camera (6+ megapixels, typically 2-3 MB each). The database already contains 30+ species with 5 pictures each. Students should collect at least 10 more species in order to gain hands-on experience with the new tool; the customer would welcome feedback on the tool and on that experience.
Phase 2: Flower recognition test.
Once the photo data is available, the team will run an experiment on test data consisting of a random selection of flower photos. The team will enter the test photos into a new Android-based flower recognition tool (the proposed IVS improvement) and collect the results. Classification uses the k-nearest-neighbors (kNN) algorithm; accuracy and time to complete will be recorded. All data (images, human-interaction results, etc.) from the Android app can be stored to the Google cloud with one click (accessible only to the researchers at this time, but potentially opened to a wider audience later). The results will be compared with data collected in previous projects.
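As a minimal sketch of the classification step, kNN assigns each test photo the majority label among its k nearest training examples. The feature vectors and species names below are hypothetical placeholders (the app's actual feature extraction is not specified here):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Return the majority label among the k training examples
    nearest to `query` under Euclidean distance.
    train: list of (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical color features (e.g. mean R, G, B of the flower region).
train = [
    ((200, 30, 40), "rose"), ((210, 25, 50), "rose"),
    ((190, 40, 35), "rose"),
    ((240, 230, 60), "sunflower"), ((250, 220, 70), "sunflower"),
    ((235, 225, 55), "sunflower"),
]
print(knn_classify(train, (205, 35, 45)))  # → rose
```

In practice the team may prefer a library implementation (e.g. scikit-learn's KNeighborsClassifier), which also handles distance weighting and efficient neighbor search.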
Phase 3: Re-run test with two different color spaces.
The existing tests used the RGB color space. We will add columns for HSI and CIELab features to the training data, then compare accuracy (using the data collected in Phase 2) with classification performed in HSI, and again in CIELab. The best-performing color space will be used in the final delivery of the second Android app.
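To illustrate how the new feature columns could be derived, the sketch below converts an 8-bit RGB pixel to HSI using the standard geometric formulation (H in degrees, S and I in [0, 1]). This is an illustrative implementation, not the app's code; a CIELab column would typically be produced via the sRGB → XYZ → Lab pipeline, available off the shelf (e.g. skimage.color.rgb2lab):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB to HSI: hue in degrees [0, 360),
    saturation and intensity in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # Clamp guards against floating-point drift outside acos's domain.
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(255, 0, 0))  # pure red → hue 0, full saturation
```

Because kNN relies on distances, the team should check whether the H, S, and I channels need rescaling before being mixed into one feature vector; hue's 0-360 range would otherwise dominate.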