In an era when everything is computational, imaging is no exception. Computational algorithms both extract valuable information from a scene and improve the very sensor that forms the image. Today, computational and image-processing enhancements have become integral parts of any digital imager, be it a miniature smartphone camera or a complex space telescope.
This crash course is designed as a prerequisite for students who would like to venture into the field of Computer Vision. We will cover the foundational mathematics of image formation and geometric projection. The concept of the Point Spread Function, which blurs the imaged object, will be explained through concrete examples and explored experimentally in image reconstruction and denoising tasks.
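To give a flavor of the reconstruction tasks mentioned above, here is a minimal sketch (not course material; all names and parameters are invented for illustration): a synthetic scene is blurred by a Gaussian PSF via the convolution theorem, then restored with a simple Wiener filter.

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Centered 2-D Gaussian kernel, normalized to unit sum.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def blur(image, psf):
    # Convolution theorem: convolution = multiplication in the Fourier domain.
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, image.shape)))

def wiener_deconvolve(blurred, psf, k=1e-3):
    # Wiener filter H* / (|H|^2 + k): the constant k regularizes the inverse
    # filter where the PSF spectrum is small (a crude denoising knob).
    H = np.fft.fft2(psf, blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy "object": a bright square on a dark background.
scene = np.zeros((64, 64))
scene[20:30, 20:30] = 1.0
psf = gaussian_psf(9, sigma=1.5)
blurred = blur(scene, psf)
restored = wiener_deconvolve(blurred, psf)
```

The restored image is closer to the original scene than the blurred one; tuning `k` trades off sharpness against noise amplification, which is exactly the kind of experiment the lab sessions target.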
Image processing will be covered with an emphasis on the Python libraries used in the rest of the imaging-related courses on the DS/IST tracks (OpenCV and others). A basic DSLR camera will serve as a model for understanding Fourier imaging and filtering methods in a short laboratory exercise. A hands-on tutorial on how to select a camera and a lens for your machine vision application will also be provided.
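Fourier filtering, as practiced in the lab, can be sketched in a few lines (a hypothetical example on synthetic data, not real camera output): a circular low-pass mask in the frequency domain removes a high-frequency stripe pattern while keeping the smooth signal.

```python
import numpy as np

def lowpass(image, cutoff):
    # Circular low-pass mask in normalized frequency coordinates
    # (cycles per sample, as returned by np.fft.fftfreq).
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mask = (fy**2 + fx**2) <= cutoff**2
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
smooth = np.sin(x)[None, :] * np.ones((128, 1))            # 1 cycle across the image
noise = 0.5 * np.sin(40 * x)[None, :] * np.ones((128, 1))  # 40-cycle "striping"
filtered = lowpass(smooth + noise, cutoff=0.05)            # keeps only low frequencies
```

With a cutoff of 0.05 cycles/sample, the 1-cycle component (frequency 1/128) passes and the 40-cycle component (40/128) is removed, so `filtered` recovers `smooth` almost exactly.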
The theory of color and stereo light-field cameras will be covered using models of commonplace Bayer RGB sensors, as well as state-of-the-art spectral and multi-lens imagers. An overview of practical regularization recipes for these cameras will be given.
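The Bayer-sensor model above boils down to one idea: each pixel records only one color, and the missing colors are interpolated (demosaiced). A toy sketch, assuming a synthetic flat-color patch and an RGGB layout (real pipelines use bilinear or edge-aware interpolation, not this crude nearest-neighbor copy):

```python
import numpy as np

def bayer_mosaic(rgb):
    # Sample an H x W x 3 image onto a single-channel RGGB mosaic:
    # R at even rows/cols, B at odd rows/cols, G elsewhere.
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return mosaic

def demosaic_nearest(mosaic):
    # Crude nearest-neighbor demosaic: copy each 2x2 cell's R, G, B samples
    # to the whole cell (only one of the two greens is used, for brevity).
    out = np.zeros(mosaic.shape + (3,))
    out[..., 0] = np.repeat(np.repeat(mosaic[0::2, 0::2], 2, 0), 2, 1)
    out[..., 1] = np.repeat(np.repeat(mosaic[0::2, 1::2], 2, 0), 2, 1)
    out[..., 2] = np.repeat(np.repeat(mosaic[1::2, 1::2], 2, 0), 2, 1)
    return out

flat = np.ones((8, 8, 3)) * np.array([0.8, 0.5, 0.2])  # uniform color patch
recovered = demosaic_nearest(bayer_mosaic(flat))
```

On a uniform patch this round-trip is exact; on real images the interpolation errors appear as color fringing, which is where the regularization recipes discussed in the course come in.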
The course will consist of three theoretical lectures interleaved with three graded in-class laboratory coding sessions on the subjects covered in the lectures. Attendance is mandatory (100%). There will be a single in-class exam during the evaluation week and no homework.