What: Robust clustering on incomplete and erroneous data sets
Who: Dr. Sami Äyrämö, University of Jyväskylä
When: 13.2. at 15:15
Where: B180

Abstract:

Scalable and robust clustering algorithms are useful tools, for example, in data mining and knowledge discovery applications that often deal with large, incomplete and erroneous data sets.

Based on the well-known K-means clustering, robust clustering methods can be easily derived by replacing the sample mean with a more robust estimator (e.g., the coordinatewise or spatial median). Robust estimators are less sensitive to contaminated and outlying values than, for instance, the sample mean. On the other hand, the non-smooth nature of some robust estimates sets special requirements for the numerical solvers. Different formulations and techniques for solving the optimization problem underlying one particular robust estimate - the spatial median - are presented.

Based on the aforementioned components, a highly automated robust clustering method (that is, one requiring a minimal number of user-defined parameters) is presented. The method consists of a number of separately developed and tested elements, such as initialization, prototype estimation, and a missing-data strategy. Furthermore, in order to estimate the correct number of clusters, a new cluster validity index is proposed. Sample applications are also given.
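
As a rough illustration of the idea, the sketch below (our own, not the speaker's implementation; names and defaults are illustrative) plugs the spatial median, computed with Weiszfeld's fixed-point iteration, into the usual K-means loop:

    import numpy as np

    def spatial_median(X, n_iter=50, eps=1e-8):
        """Weiszfeld iteration for the spatial (geometric) median of the rows of X."""
        y = X.mean(axis=0)                                       # start from the sample mean
        for _ in range(n_iter):
            d = np.maximum(np.linalg.norm(X - y, axis=1), eps)   # avoid dividing by zero
            w = 1.0 / d
            y = (w[:, None] * X).sum(axis=0) / w.sum()
        return y

    def k_spatial_medians(X, k, n_iter=20, seed=0):
        """K-means-style clustering with spatial medians as cluster prototypes."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            centers = np.array([spatial_median(X[labels == j])
                                if (labels == j).any() else centers[j]
                                for j in range(k)])
        return labels, centers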

What: Dimensionality reduction for information visualization
Who: Jarkko Venna, Helsinki University of Technology
When: 7.3. at 14:15
Where: 106B

Abstract:

Visualizations of similarity relationships between data points are commonly used in exploratory data analysis to gain insight into new data sets. Answers are sought to questions like: Does the data consist of separate groups of points? What is the relationship of the previously known interesting data points to the other data points? Which points are similar to the points known to be of interest? Visualizations can be used both to amplify the cognition of the analyst and to help in communicating interesting similarity structures found in the data to other people.

One of the main problems faced in information visualization is that while the data is typically very high-dimensional, the display is limited to only two or at most three dimensions. Thus, for visualization, the dimensionality of the data has to be reduced. In general, it is not possible to preserve all pairwise relationships between data points in the dimensionality reduction process. This has led to the development of a large number of dimensionality reduction methods that focus on preserving different aspects of the data. Most of these methods were not developed to be visualization methods, which makes it hard to assess their suitability for the task of visualizing similarity structures. This problem is made more severe by the lack of suitable quality measures in the information visualization field.

Recently, a new visualization task, visual neighbor retrieval, was introduced. It has allowed the formulation of information visualization as a visual information retrieval task, and has led to the development of new quality measures for information visualization and of new dimensionality reduction methods specifically aimed at this task.
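
One quality measure developed in this line of work, trustworthiness (Venna & Kaski), is available in scikit-learn; the snippet below is a minimal usage sketch with an illustrative data set and parameters, not code from the talk:

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.manifold import trustworthiness

    X = load_digits().data                        # 64-dimensional digit images
    X2 = PCA(n_components=2).fit_transform(X)     # a simple 2-D projection
    # 1.0 means every 2-D neighborhood contains only true data-space neighbors
    print(trustworthiness(X, X2, n_neighbors=10))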

What: Using two line-scanning-based spectral cameras simultaneously in one measurement process to create a wider spectral range from the measured target
Who: Jukka Antikainen
When: 26.4. at 10:00
Where: 2D106B

Abstract:

In this paper, we describe a method for combining two different line-scanning-based spectral cameras, visible and infrared, into one measurement process, and for automatically combining their different spectral ranges into one wide range from 400 nm to 1700 nm. The proposed method has been used in scientific projects, and we present two practical applications where this system is used and the benefits we get from it. The main part of this study concerns how to make the measurement process more effective by using two line-scanning-based spectral cameras in the same measurement process, and how to combine the cameras' spectra. We also describe differences between common spectral imaging systems, and we present one of the software algorithms we have produced for use in the measurement process.
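
The abstract does not give the combination algorithm, but conceptually the task is to resample both cameras' spectra onto one wavelength grid and blend them where they overlap; a sketch of our own, under those assumptions:

    import numpy as np

    def merge_spectra(wl_vis, s_vis, wl_nir, s_nir, step=5.0):
        """Resample a visible and an infrared spectrum (wavelengths in nm,
        assumed sorted) onto one 400-1700 nm grid and cross-fade the overlap."""
        grid = np.arange(400.0, 1700.0 + step, step)
        vis = np.interp(grid, wl_vis, s_vis, left=np.nan, right=np.nan)
        nir = np.interp(grid, wl_nir, s_nir, left=np.nan, right=np.nan)
        out = np.where(np.isnan(vis), nir, vis)    # use whichever camera covers the band
        both = ~np.isnan(vis) & ~np.isnan(nir)
        if both.any():
            t = np.linspace(0.0, 1.0, both.sum())  # linear cross-fade in the overlap
            out[both] = (1.0 - t) * vis[both] + t * nir[both]
        return grid, out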

What: Retrieval from a spectral image database by reconstructed spectral images and combinations of inner product images. Oral presentation at the 9th International Symposium on Multispectral Colour Science and Application, Taipei, Taiwan, May 30 - June 1, 2007.
Who: Oili Kohonen
When: 15.5. at 9:15
Where: 2D106

Abstract:

The possibility of reducing the dimensionality of the training data in retrieval from a spectral image database, by representing spectral images as combinations of inner product images, is examined. Moreover, the number of eigenvectors needed to reconstruct spectral images without a significant percentage of classification error is studied. The experiments are performed using a real spectral image database.
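
As a sketch of the kind of reconstruction being studied (our own illustration; the paper's actual procedure may differ), spectra can be projected onto a few eigenvectors, yielding one inner product image per eigenvector, and then reconstructed from those coefficients:

    import numpy as np

    def reconstruct(X, n_components):
        """X: pixel spectra as rows. Project onto the top eigenvectors of the
        spectral covariance and reconstruct from the resulting coefficients."""
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:n_components]          # top eigenvectors of the covariance
        coeffs = Xc @ V.T              # "inner product images", one per eigenvector
        return coeffs @ V + mean       # reconstruction from n_components vectors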

What: Spectral Images and the Retinex Model. Oral presentation at the 9th International Symposium on Multispectral Colour Science and Application, Taipei, Taiwan, May 30 - June 1, 2007.
Who: Tuija Jetsu
When: 24.5. at 9:15
Where: 2D106

Abstract:

Human color vision models have been used as a basis for color image processing. One of the well-known models is the Retinex model. The Retinex algorithm has mainly been applied to grayscale or RGB images, which introduces a discrepancy with the real visual system even before the Retinex processing. In this paper we consider different ways of applying the Retinex color appearance model to spectral images. We suggest processing each spectral channel of the image separately. We also consider some other approaches, e.g., converting the spectral images to LMS responses or to different color spaces (XYZ, L*a*b*). We compare the results gained using spectral images as the starting point with the results obtained by applying the Retinex to RGB images. In addition, we consider the Retinex model in the color constancy problem by using spectral images. In this paper, the Retinex processing is done using the MATLAB implementation of the Retinex algorithm.
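
The talk uses the MATLAB implementation of the Retinex algorithm; purely as an illustration of the per-channel idea, a single-scale Retinex (our simplification, with an assumed Gaussian surround) applied independently to each spectral band might look like:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ssr_per_channel(cube, sigma=30.0, eps=1e-6):
        """cube: H x W x bands spectral image; single-scale Retinex per band."""
        out = np.empty_like(cube, dtype=float)
        for b in range(cube.shape[-1]):
            ch = cube[..., b].astype(float) + eps
            # log ratio of each pixel to its Gaussian-blurred surround
            out[..., b] = np.log(ch) - np.log(gaussian_filter(ch, sigma) + eps)
        return out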

What: Color Mixing and Color Separation of Pigments with Concentration Prediction. Oral presentation at the Gjøvik Color Imaging Symposium 2007, June 13-15, 2007.
Who: Pesal Koirala
When: 13.6. at 9:15
Where: 2D106

Abstract:

In this study, we propose a color mixing and color separation method, based on the Kubelka-Munk (KM) model, for pigments painted on a plastic surface. Eleven different pigments with seven different concentrations have been used as the training set. The concentration of each pigment in the mixture is estimated from the training set by using the least-squares pseudo-inverse calculation. The result depends on the number and type of pigments selected for the calculation; at most, all pigments can be selected. Combinations resulting in negative or unusually high concentrations are discarded from the list of candidate combinations. The optimal pigment set and its concentrations are estimated by minimizing the difference between the given and the predicted reflectance.
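
A minimal sketch of the estimation step (ours, assuming single-constant Kubelka-Munk theory and ignoring the substrate term for brevity): the K/S transform makes the mixture roughly linear in the concentrations, which can then be solved by least squares.

    import numpy as np

    def k_over_s(R):
        """Kubelka-Munk transform of reflectance (0 < R <= 1)."""
        return (1.0 - R) ** 2 / (2.0 * R)

    def estimate_concentrations(R_mix, R_pigments):
        """R_mix: mixture reflectance spectrum; R_pigments: rows are the
        unit-concentration reflectance spectra of the candidate pigments."""
        A = k_over_s(R_pigments).T          # wavelengths x pigments
        b = k_over_s(R_mix)
        c, *_ = np.linalg.lstsq(A, b, rcond=None)
        return c    # combinations with negative entries would be discarded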

What: Lossy Compression of Map Images
Who: Alexey Podlasov
When: 21.6. at 14:15
Where: 2D106

Abstract:

An algorithm for lossy compression of scanned map images is proposed. The algorithm is based on color quantization, efficient statistical context tree modeling and arithmetic coding. The rate-distortion performance is evaluated on a set of scanned maps and compared to the JPEG 2000 lossy compression algorithm and to ECW, a commercially available solution for the compression of satellite and aerial images. The proposed algorithm outperforms these competitors in the rate-distortion sense over most of the operational rate-distortion curve.
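
As an illustration of the first stage only (the color quantization; our sketch, not the authors' code), a scanned map can be reduced to a small palette whose index image is then passed on to context modeling and arithmetic coding:

    import numpy as np
    from sklearn.cluster import KMeans

    def quantize(img, n_colors=8, seed=0):
        """img: H x W x 3 uint8 scan; returns a palette-index image and the palette."""
        pixels = img.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_colors, n_init=4, random_state=seed).fit(pixels)
        palette = km.cluster_centers_.astype(np.uint8)
        return km.labels_.reshape(img.shape[:2]), palette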

What: Hardware Acceleration of AdaBoost in Pattern Detection, Image Processing in Retina and Search in spoken data
Who: Adam Herout, Michal Seeman and Igor Szoke from Brno University of Technology
When: 4.9. at 12:00
Where: B181

Abstract:

Adam Herout: Hardware Acceleration of AdaBoost in Pattern Detection
The Brno team works intensively on the hardware acceleration of various image-processing and computer vision techniques, and AdaBoost seems to be a promising one, even for the class of hardware that we tend to work with (FPGAs, DSPs). The presentation will summarize the AdaBoost algorithm in the context of pattern recognition and will give some details about the recent acceleration achievements. Some future visions and expectations will also be given.
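
For reference, a textbook AdaBoost with decision stumps (our sketch, not the Brno implementation; the exhaustive stump search is written for clarity, not speed). Labels are +/-1:

    import numpy as np

    def adaboost(X, y, n_rounds=50):
        """Train a weighted ensemble of decision stumps; y holds only +1 and -1."""
        n, d = X.shape
        w = np.full(n, 1.0 / n)
        ensemble = []
        for _ in range(n_rounds):
            best = None
            for j in range(d):                      # exhaustive stump search
                for thr in np.unique(X[:, j]):
                    for sign in (1, -1):
                        pred = sign * np.where(X[:, j] > thr, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, thr, sign)
            err, j, thr, sign = best
            alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
            pred = sign * np.where(X[:, j] > thr, 1, -1)
            w *= np.exp(-alpha * y * pred)          # upweight misclassified samples
            w /= w.sum()
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def predict(ensemble, X):
        score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
        return np.sign(score)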

Michal Seeman: Image Processing in Retina
The retina is a very thin layer in our eyes, mostly known as the light-sensitive part. But the retina also carries out a significant part of image processing: dynamic range compression, adaptation to the amount of light, and other complex behaviour of our sight. I will try to illustrate some of these functions from a computer graphics point of view.

Igor Szoke: Search in spoken data
The talk will describe a search system for speech data. The first part will be about speech recognition: acoustic signal parameterization and the hidden-Markov-model-based recognition technique will be briefly described. The second part will deal with indexing the output of the speech recognizer. The indexed speech is finally searched by a search module to allow fast search times.
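
As a toy illustration of the indexing stage (ours, not the system described), time-stamped recognizer output can be put into an inverted index so that search never has to rescan the audio:

    from collections import defaultdict

    def build_index(transcripts):
        """transcripts: {utterance_id: [(word, start_time_seconds), ...]}"""
        index = defaultdict(list)
        for utt, words in transcripts.items():
            for word, start in words:
                index[word].append((utt, start))
        return index

    index = build_index({"utt1": [("hello", 0.3), ("world", 0.9)]})
    print(index["hello"])   # [('utt1', 0.3)] -> jump straight to 0.3 s in utt1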

What: Cone Ratio in Color Vision Models
Who: Tuija Jetsu
When: 7.9. at 14:15
Where: B181

Abstract:

The ratio of the three different cone types in the human eye varies between individuals and between different regions of the retina. We have studied how these variations would affect color vision models. In this paper, the basis of our analysis is the Multi-Stage Color Model of De Valois & De Valois (1993), which is one of the well-known color vision models. We present how changes in the cone ratio affect the different stages of this model. Previously, color vision models have mainly been studied using 3-dimensional color spaces. We use a spectral approach in the model evaluation, which gives us more versatile possibilities for studying the properties of the model.
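
As a heavily simplified sketch of the spectral approach (ours; the ratio and the opponent combinations below are illustrative, not the De Valois model), cone responses are inner products of the stimulus spectrum with the LMS sensitivities, and the cone ratio enters as weights in the opponent stage:

    import numpy as np

    def cone_responses(spectrum, lms_sensitivities):
        """spectrum: (n_wavelengths,); lms_sensitivities: (n_wavelengths, 3)."""
        return spectrum @ lms_sensitivities            # -> (L, M, S)

    def opponent_stage(lms, ratio=(10.0, 5.0, 1.0)):
        """Toy opponent signals under an assumed L:M:S cone ratio."""
        L, M, S = lms * np.asarray(ratio) / sum(ratio)
        return np.array([L - M, S - (L + M), L + M + S])   # RG, BY, luminance
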
What: Developing Sustainable Language Technology for African Languages
Who: Professor Arvi Hurskainen
When: 11.9. at 11:00
Where: 2D106B

Abstract:

I will discuss the importance of making the right choices in developing professional language technology for African languages. The issues to be considered include the following: (1) the production of a single application for a special purpose vs. a module in a larger language management system; (2) systems based on statistical problem solving and learning techniques vs. systems based on language description and linguistic rules; (3) the special requirements of the language type in question; (4) the environments in which the system is intended to be used; (5) the possibility of integrating the system into other applications developed in the field. Finally, we should consider how all this can be put together. Should we work with the tools available in the public domain, or should we license proprietary tools and development environments? These questions are discussed in the context of the Swahili Language Manager (SALAMA).

What: Mobile learning and "Development"
Who: Associate Professor John Traxler, School of Computing and IT, University of Wolverhampton.
When: 14.9. at 9:00
Where: 2D309

Abstract:

In 2003, the Government of Kenya announced the introduction of Free Primary Education, leading to an increase in primary enrolment of nearly one million. The subsequent fall in the school population pointed to a retention problem aggravated by over-crowding and under-training. A major challenge was to increase the number of trained teachers rapidly while at the same time improving the quality of the school system and using it as a vehicle for radical social and cultural transformation across issues that included child marriage and other tribal practices, perceptions of endemic corruption, poor communications, over-centralisation and widespread adult illiteracy. DFID helped the Ministry develop an in-service distance learning programme specifically intended to meet the needs of 200,000 primary school teachers.

The SMS component underwent small-scale field trials in early 2006 and larger field trials in late 2006. The system is free to authorised users via a short code. The messages themselves have a limited and predefined syntax, each type starting with a keyword, and the system has been extended to gather and analyse schools' enrolment data. At the end of the second trials, the technical and organisational achievements of the system are impressive. Twelve districts in eight provinces and the Ministry itself were involved, and the total number of users was about eight thousand. About 85% of the registered users were active on the system, and over three thousand participants were female. Users have consumed over a quarter of a million SMS messages to date.

The system is expected to undergo a final formal evaluation exercise in the coming months, intended to explore the relationships between the system's various quantitative and qualitative, direct and indirect costs and benefits and their impact on the long-term sustainability and use of the system.

A spin-off of the current system, which makes school exam registration and results nationally more accurate, fast and transparent, has already become a self-funding service for Kenyan parents.

John Traxler is now working with Biovision, a Swiss organic farming charity, and Avallain, Swiss e-learning specialists, to pilot the integration of similar SMS technologies with multimedia web-based resources to support sustainable organic farming in Kenya, and with the Ministry of Gender, Sports, Culture and Social Services, looking at how M-Pesa (an SMS-based banking system run by Safaricom in Kenya) can be used alongside the earlier SMS technologies to support disenfranchised youth with microfinance loans and informal literacy training. The seminar will explore these projects and their impact on our understanding of mobile learning, pedagogy, development, capacity, evaluation and intervention.

What: Commercialization of research
Who: Research Liaison Office
When: 21.9. at 10:00
Where: 2D106

Abstract:

The Järvi-Suomen TULI project will organise information sessions on commercialization in faculties in September. The sessions will last for approximately one hour. Faculty personnel, researchers and postgraduate students are welcome to participate in the sessions. The project is part of the national Tekes-financed TULI programme that promotes business awareness in the field of research. The project is implemented as a joint project of a consortium formed by the universities of Jyväskylä, Kuopio and Joensuu. The project aims to provide the universities with functional mechanisms and practices for commercializing publicly-funded research results and carrying out the related technology transfer.

What: Microscopic Image Quality Prediction for Printing Technology
Who: MSc Masayuki Ukishima, Chiba University, Japan
When: 17.10. at 09:15
Where: 2D106

Abstract:

Multi-primary printing makes it possible to control not only macroscopic image quality, like spectral reflectance, but also microscopic image quality, like granularity. However, it is impossible to predict the microscopic reflectance distribution with conventional methods like the Murray-Davies equation, the Yule-Nielsen equation, the Neugebauer equation, the Clapper-Yule model and so on. The purpose of this research is to propose a method for predicting the microscopic reflectance distribution based on the optical characteristics of the paper and inks.
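
For contrast with the microscopic approach, the macroscopic baseline is a one-liner; e.g., the Murray-Davies equation predicts only the mean reflectance from fractional dot coverage (worked example ours):

    def murray_davies(a, R_ink, R_paper):
        """Mean halftone reflectance with fractional ink coverage a."""
        return a * R_ink + (1.0 - a) * R_paper

    print(murray_davies(0.3, 0.05, 0.85))   # 30 % coverage -> 0.61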

What: Measuring anisotropy of human visual characteristics for perceptual motion blur evaluation
Who: Shinji Nakagawa, Chiba University, Japan
When: 17.10. at 09:15
Where: 2D106

Abstract:

In recent years, several methods for the evaluation or quantification of video image quality have been studied, such as MPRT (Moving Picture Response Time) for quantifying the motion blur occurring on hold-type displays. However, the MPRT method has some problems to be solved. One of them is that the MPRT method does not consider the anisotropy of the display and of the human visual system, because MPRTs are measured only for horizontal edge scrolls. We are already able to measure MPRTs for arbitrary directions. Next, we try to measure the anisotropy of human visual characteristics such as motion blur perception. We think that perceived motion blur changes with the scroll speed, direction and intensity (color) pattern of moving edges, and with the tracking accuracy of the human eye.

What: Mobile-learning
Who: Adele Botha and Carolina Islas Sedano
When: 25.10. at 10:15
Where: 2D309

Abstract:

Mobile learning presents unique challenges and opportunities for education. It can extend the boundaries of the formal classroom to anytime, anyplace learning. This workshop aims to present a case for the adoption of mobile learning as an extension to the technology integration curriculum in a school. We address issues from the pupil's, the educator's and the manager's points of view. The possibilities of using this technology will be demonstrated with hands-on activities and a review of selected mobile learning initiatives.

What: Anisotropic Reflectance from Paper - Measurements, Simulations and Analysis
Who: Per Edström, Digital Printing Center - Mid Sweden University
When: 26.11. at 14:15
Where: 2D106

Abstract:

A very brief overview of the Digital Printing Center (DPC) at Mid Sweden University is given before the scientific contribution described below.

It is investigated experimentally and theoretically how the anisotropy of light reflected from paper depends on the paper absorption and thickness. This is done by measuring the angular resolved reflectance from a series of handsheets containing different amounts of dye and filler and varying in grammage. The theoretical investigation is done by using the angular resolved model DORT2002. Measurements and simulations both show that the anisotropy increases with increased absorption and is higher for lower grammages. The relative amount of light scattered into larger polar angles increases for these cases. It is shown that the range of exact validity of the Kubelka-Munk model is limited to a case where an infinitely thick non-absorbing medium is illuminated diffusely, since this is the only situation where the reflectance is isotropic. It is also shown that the reflectance from what is intuitively thought to be a perfect diffusor strongly depends on the illumination conditions, meaning that a bulk scattering medium that reflects light diffusely independently of the illumination conditions does not exist.

It is investigated how the anisotropy affects d/0 measurements. The DORT2002 model is adapted to the d/0 instrument to allow for inverse calculations starting from d/0 measurement data. This gives access to the objective parameters used in the DORT2002 model through an instrument originally not designed for this purpose. It is shown that this method can explain more than 50 % of the widely investigated anomalous parameter dependence of the Kubelka-Munk model.

The causes of anisotropic reflectance are investigated and it is shown, using analytical methods and the Monte Carlo model Grace, that it depends on the relative contribution from near-surface bulk scattering. The reflectance in larger polar angles is higher from near-surface bulk scattering than it is from scattering deeper inside the medium. Near-surface bulk scattering dominates in strongly absorbing media since the remaining light is absorbed and in optically thin media since the remaining light is transmitted. Obliquely incident illumination causes the light to scatter closer to the surface, and this also causes the relative contribution from near-surface bulk scattering to increase.

The investigation was made by Magnus Neuman, and the Monte Carlo simulations were made by Ludovic Coppel.

What: Measurement of Print Quality and Paper Properties related to Printability using Imaging Techniques
Who: Mattias Andersson, Digital Printing Center - Mid Sweden University
When: 26.11. at 14:15
Where: 2D106

Abstract:

The purpose of a printed product is to transmit something of value to the observer, such as information or a feeling. High print quality adds value to a printed product, and as long as the product is functional in other aspects, customers are willing to pay for it. Although print quality is something visible to most people, we do not all perceive it in the same way - each observer of a printed product has a personal definition or opinion about what print quality is. Moreover, end users tend to be highly sensitive to the weakest link in their judgments of overall print quality. Therefore, a single disturbing defect in the print can heavily reduce the overall print quality impression, even if the print looks great in all other respects.

When acquiring paper or board, the printer buys printability, thus providing the printing press with the prerequisites for producing a good result. Good printability implies good ink transfer from printer to substrate, good ink-paper interaction and, finally, good ink-on-paper appearance. Since a printed page is the product of a printing system, it is essential to measure print quality.

As most printed products are aimed for people to read and view, it is reasonable to assume that a definition of print quality in some way should be related to the human visual and perceptive system. Therefore, subjective evaluation is desirable since it gives a quality judgment from one or more persons actually viewing the print. Nevertheless, subjective evaluations tend to suffer from fluctuations caused by numerous factors such as viewing conditions, past experiences of the observers and observer fatigue. In addition, the use of observer panels is an expensive and time-consuming procedure. Therefore, there is arguably a need for objective instrumental-based quality measurements, although there are systematic and analytic methods to extract quantitative measures from subjective evaluations.

Objective measures of print quality are quantitative and obtained by physically measuring the printed substrate. There are certain requirements for good objective print quality measurements: they should be physically well defined, reproducible, and have good repeatability. Preferably, they should be possible to connect with physical paper properties in order to find the cause of a print defect, since print quality can always be related to the printing substrate.

In this presentation, examples will be given of methods and instruments used in the paper industry to carry out objective measurements of visible print quality features. Spectrophotometers and scanner- or camera-based systems, which utilize various image analysis algorithms to measure print quality attributes, are the most frequently used tools for obtaining objective print quality measurements. In addition, examples will be given of the imaging of other paper properties related to printability, such as surface structure, formation and the thickness of the coating layer.

What: RFID MAC Protocols
Who: Associate Prof. Chaewoo Lee, ECE dept. Ajou University, South Korea
When: 4.12. at 10:00
Where: 2D309

Abstract:

Radio-frequency identification (RFID) is an automatic identification method, relying on storing and remotely retrieving data using devices called RFID tags or transponders. In this seminar, after a brief introduction to RFID technology, we explain RFID MAC protocols and discuss the most important issue in RFID, i.e., the anti-collision problem. Since most cheap RFID tags are of the passive type, the data they send are prone to colliding with each other; furthermore, the RFID readers themselves may also collide. To improve the efficiency of RFID systems, it is crucial to reduce collisions, and the methods that do so are called anti-collision algorithms. To discuss the problem, we review some of the most important anti-collision algorithms presented in research papers and standards, and then discuss the performance of our own anti-collision algorithms in detail.
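
As a toy example of the tag side of the problem (our sketch; real standards such as EPCglobal Gen2 use adaptive variants), framed slotted ALOHA lets each tag answer in a random slot, and only slots holding exactly one tag yield a successful read:

    import random

    def framed_slotted_aloha(n_tags, frame_size, rounds=1000, seed=0):
        """Mean fraction of slots containing exactly one responding tag."""
        rng = random.Random(seed)
        ok = 0
        for _ in range(rounds):
            slots = [0] * frame_size
            for _ in range(n_tags):
                slots[rng.randrange(frame_size)] += 1
            ok += sum(1 for s in slots if s == 1)
        return ok / (rounds * frame_size)

    print(framed_slotted_aloha(16, 16))   # peaks near 1/e when frame size ~ tag count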

What: A Tale of Two Studies: Hybrid Gaze-Contingent Rendering and EyeWrite
Who: Andrew T. Duchowski, School of Computing, Clemson University
When: 14.12. at 15:15
Where: 2D106

Abstract:

Results from two studies are presented. The first is a passive interactive study where a nonisotropic hybrid gaze-contingent display (GCD) is tested during visual search. Results suggest an inverse relationship between the gaze-contingent display's inset size and mean search time, a trend consistent with existing techniques. When degrading geometry, maintenance of a target's silhouette edges appears to decrease search times. Post-hoc analysis, however, suggests a point of diminishing returns with an inset larger than 15 when target discrimination is a component of visual search. The second is an active gaze-based application developed for gestural eye-typing, EyeWrite. Results from a longitudinal study comparing EyeWrite with an on-screen keyboard indicate that EyeWrite's inherent multi-stroke handicap (4.52 saccades per character, frequency-weighted average) is sufficient for the on-screen keyboard to edge out EyeWrite in speed performance. Eye-typing speeds with EyeWrite approached 5 wpm on average (8 wpm by proficient users), whereas keyboard users achieved about 7 wpm on average (in line with previous results). However, EyeWrite users left significantly fewer uncorrected errors in the final text, with no significant difference in the number of errors made during entry and correction, indicating a speed-accuracy tradeoff. Subjective results indicate that participants felt that EyeWrite was significantly faster, easier to use, and caused less ocular fatigue than the on-screen keyboard.