Knowledge and understanding
The purpose of the course is to understand how to develop advanced applications (Apps) for mobile devices. In addition to specific knowledge of the programming languages and tools involved (C++, Java and the OpenCV library), students will also acquire knowledge of the Android operating system.
Applying knowledge and understanding
The student will acquire the ability to develop complex applications for mobile devices.
It is highly desirable (although not compulsory) to have taken the Mobile Device Programming course during the Bachelor's degree.
Course contents summary
Brief recap of mobile platforms, mobile device programming and its related problems. Details about the Android platform and the Android SDK.
Development of graphical interfaces in Android through widgets. Sensor access with the Android SDK. Good practices in App development.
Android NDK (Native Development Kit), JNI (Java Native Interface) and Android Studio IDE.
Examples of potential advanced applications.
Introduction to mobile vision (computer vision applied to mobile devices).
OpenCV library and its interfacing with Android: core module and Mat container; imgproc module (2D image filters, binary morphology, edge detection, template matching, etc.); interface creation; feature extraction (Shi-Tomasi, SIFT, SURF, ORB, etc.); object detection (cascade classifiers, etc.).
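Among the imgproc topics above, template matching is a good first example of how these operations work. OpenCV provides it through Imgproc.matchTemplate with several similarity measures; the pure-Java sketch below (an illustration, not the course's actual code) implements one of them, the sum of squared differences (SSD), by sliding a template over a grayscale image stored as a plain 2D array.

```java
// Illustrative sketch of template matching by sum of squared differences (SSD),
// one of the similarity measures behind OpenCV's Imgproc.matchTemplate.
// Images are plain 2D int arrays of gray values (a simplified stand-in for Mat).
public class TemplateMatch {
    // Returns {bestRow, bestCol}: the top-left corner where the template
    // matches the image with the lowest SSD score.
    public static int[] bestMatch(int[][] image, int[][] tmpl) {
        int ih = image.length, iw = image[0].length;
        int th = tmpl.length, tw = tmpl[0].length;
        long bestScore = Long.MAX_VALUE;
        int[] best = {0, 0};
        for (int r = 0; r + th <= ih; r++) {
            for (int c = 0; c + tw <= iw; c++) {
                long score = 0;
                for (int y = 0; y < th; y++)
                    for (int x = 0; x < tw; x++) {
                        long d = image[r + y][c + x] - tmpl[y][x];
                        score += d * d;
                    }
                if (score < bestScore) { bestScore = score; best = new int[]{r, c}; }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] image = {
            {10, 10, 10, 10},
            {10, 90, 80, 10},
            {10, 70, 60, 10},
            {10, 10, 10, 10},
        };
        int[][] tmpl = {
            {90, 80},
            {70, 60},
        };
        int[] hit = bestMatch(image, tmpl);
        System.out.println(hit[0] + "," + hit[1]); // exact sub-block starts at row 1, col 1
    }
}
```

The brute-force scan is O(image size x template size); OpenCV's implementation is heavily optimized, but the result is the same idea: a score map whose minimum (for SSD) locates the template.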
Examples of OpenCV projects on mobile devices.
Brief introduction to other mobile applications. Overview of sensor usage (accelerometers, gyroscopes, ...) for advanced applications.
The course includes about 42 hours of lessons on the following topics (duration is indicative):
- course introduction; introduction to the mobile vision and its applications; introduction to Android; development tools; emulator; GenyMotion emulator (4 hours)
- first OpenCV example; representation of digital images; color spaces; Mat class in OpenCV; methods of the Mat class; (3 hours)
- loading an image from a file; image histogram; visualization of the image histogram; histogram equalization (5 hours)
- spatial filters: smoothing, order-statistics, sharpening; first-order filters; implementation of spatial filters; edge detection; implementation of Sobel and Canny; implementation of Canny on live images (4 hours)
- Hough transform for lines and its implementation; the SeekBar widget; management of click and long click; GBHT; Hough transform for circles (2 hours)
- geometrical transformations; GeometryCorrection example; touch event (2 hours)
- feature detection and matching; Harris; FAST detector; ORB detector; feature description; BRIEF, ORB, BRISK and FREAK descriptors; implementation of image matching; invariance of the operators (6 hours)
- competition and project presentation (2 hours)
- SIFT; SURF; first NDK example; prime-number check in NDK; NDK-OpenCV interface; Harris, FAST and ORB in NDK; image matching and stitching in NDK; recompiling C++ source code for Android NDK; SIFT, SURF, FREAK and BRIEF (6 hours)
- sensor access in Android; accelerometer, compass and environmental sensors; Bluetooth; GPS; (3 hours)
- classifiers; Cascade classifiers (3 hours)
- image segmentation (2 hours)
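To give a flavour of the histogram topics above: histogram equalization redistributes gray levels through the cumulative distribution function (CDF) so the output histogram is approximately flat. OpenCV offers this as Imgproc.equalizeHist; the pure-Java sketch below (illustrative only, not the course's code) applies the classic formula to a flat array of pixel values.

```java
// Illustrative sketch of histogram equalization on a grayscale image stored
// as a flat int array. OpenCV provides this directly as Imgproc.equalizeHist.
public class HistEq {
    // levels = number of gray levels (e.g. 256 for an 8-bit image).
    public static int[] equalize(int[] pixels, int levels) {
        // 1. Histogram of gray values
        int[] hist = new int[levels];
        for (int p : pixels) hist[p]++;
        // 2. Cumulative distribution function
        int[] cdf = new int[levels];
        int sum = 0;
        for (int v = 0; v < levels; v++) { sum += hist[v]; cdf[v] = sum; }
        // 3. Smallest nonzero CDF value, used by the classic equalization formula
        int cdfMin = 0;
        for (int v = 0; v < levels; v++) if (cdf[v] > 0) { cdfMin = cdf[v]; break; }
        int n = pixels.length;
        if (n == cdfMin) return pixels.clone(); // uniform image: nothing to spread
        // 4. Remap each pixel: out = round((cdf(v) - cdfMin) / (n - cdfMin) * (levels - 1))
        int[] out = new int[n];
        for (int i = 0; i < n; i++)
            out[i] = Math.round((cdf[pixels[i]] - cdfMin) * (levels - 1f) / (n - cdfMin));
        return out;
    }

    public static void main(String[] args) {
        // Two gray values clustered in the middle get stretched to the full range
        int[] img = {100, 100, 200, 200};
        System.out.println(java.util.Arrays.toString(equalize(img, 256)));
    }
}
```

In the example the darker cluster maps to 0 and the brighter one to 255, which is exactly the contrast stretch equalization is meant to produce.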
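The Hough transform lecture can likewise be previewed with a minimal sketch. Each edge point votes for all lines rho = x*cos(theta) + y*sin(theta) passing through it, and the accumulator peak identifies the dominant line; OpenCV exposes this as Imgproc.HoughLines. The pure-Java version below (an assumption-laden toy, not the course implementation) uses 1-degree theta steps and integer rho bins.

```java
// Illustrative sketch of the Hough transform for lines (cf. Imgproc.HoughLines).
// Each edge point (x, y) votes in an accumulator over (theta, rho), where
// rho = x*cos(theta) + y*sin(theta); the peak gives the strongest line.
public class HoughLines {
    // points: array of {x, y} edge coordinates; maxRho bounds |rho|.
    // Returns {thetaDegrees, rho} of the most-voted line.
    public static int[] strongestLine(int[][] points, int maxRho) {
        int[][] acc = new int[180][2 * maxRho + 1]; // rho in [-maxRho, maxRho]
        for (int[] p : points) {
            for (int t = 0; t < 180; t++) {
                double th = Math.toRadians(t);
                int rho = (int) Math.round(p[0] * Math.cos(th) + p[1] * Math.sin(th));
                acc[t][rho + maxRho]++; // one vote per (theta, rho) cell
            }
        }
        int bestT = 0, bestR = 0, bestVotes = -1;
        for (int t = 0; t < 180; t++)
            for (int r = 0; r < acc[t].length; r++)
                if (acc[t][r] > bestVotes) { bestVotes = acc[t][r]; bestT = t; bestR = r - maxRho; }
        return new int[]{bestT, bestR};
    }

    public static void main(String[] args) {
        // Three edge points on the vertical line x = 2
        int[][] pts = {{2, 0}, {2, 1}, {2, 2}};
        int[] line = strongestLine(pts, 10);
        System.out.println("theta=" + line[0] + " rho=" + line[1]);
    }
}
```

For the vertical line x = 2 the peak lands at theta = 0 degrees and rho = 2, i.e. the line's normal form; real detectors add thresholding and non-maximum suppression around such peaks.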
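Finally, the binary descriptors in the feature-matching lecture (BRIEF, ORB, BRISK, FREAK) are all compared the same way: by Hamming distance between bit strings, typically via a brute-force matcher (OpenCV's BFMatcher with NORM_HAMMING). The pure-Java sketch below shows that core comparison on descriptors packed into longs; the tiny 4-bit descriptors are purely illustrative.

```java
// Illustrative sketch of binary-descriptor matching by Hamming distance,
// the comparison used for BRIEF/ORB/BRISK/FREAK (cf. BFMatcher + NORM_HAMMING).
public class BinaryMatch {
    // Hamming distance between two descriptors packed into long words:
    // XOR exposes differing bits, bitCount counts them.
    public static int hamming(long[] a, long[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) d += Long.bitCount(a[i] ^ b[i]);
        return d;
    }

    // Brute-force nearest neighbour: index of the train descriptor
    // closest (in Hamming distance) to the query descriptor.
    public static int nearest(long[] query, long[][] train) {
        int best = -1, bestD = Integer.MAX_VALUE;
        for (int i = 0; i < train.length; i++) {
            int d = hamming(query, train[i]);
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        long[][] train = {{0b1111L}, {0b1100L}, {0b0000L}};
        long[] query = {0b1100L};
        System.out.println(nearest(query, train));
    }
}
```

Here the query exactly matches train[1], so the program prints 1. Hamming distance on packed words is what makes binary descriptors so fast on mobile hardware: a handful of XOR and popcount instructions per comparison, versus floating-point L2 distances for SIFT/SURF vectors.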
Course slides and online material.
The course will consist of about 40 hours of lectures, with practical programming examples.
The course will be delivered in blended mode: in-person lectures with simultaneous live streaming. Each lesson will also be video- and audio-recorded and made available on Elly after it ends.
Assessment methods and criteria
The exam consists of the development of an advanced App, to be carried out individually. The topic of the App will be agreed with the teacher.