BibTeX:
@mastersthesis{Oevern2007,
  author = {Jon Anders {\O}vern},
  title = {Film restoration using extended ACE},
  school = {Gj{\o}vik University College},
  year = {2007}
}
Abstract: Our work focuses particularly on the quality of spectral colour image reproduction in the printing world, more specifically on spectral printer modelling. Several spectral printer models have been developed so far. The spectral Neugebauer model is the spectral printer model which generalizes the Murray-Davies equation to include more than two reflectances. In this model the reflectance of the printer's output is computed as a convex combination of Neugebauer Primaries.

To obtain the reflectances of these Neugebauer Primaries, one has to find a way to print them out so that their reflectances can be measured. This raises the following problems. The number of patches to print increases exponentially with the number of colorants of the printer, since the number of possible colorant combinations is 2^n, where n is the number of colorants. For a spectral printer with more than 7 or 8 colorants, printing all those Neugebauer Primaries and measuring them consumes a great deal of time and material. The other problem is that finding a way to actually print all possible combinations is very difficult. There are charts which incorporate all Neugebauer Primaries of a CMYK printer, but the problem begins when we consider printers with more colorants than these four. There are no charts which incorporate the Neugebauer Primaries of an 8-channel printer. In this case the researcher is forced to obtain the SDK of the printer and find some way to make the printer print anything he/she wants.

In this thesis we tested the Kubelka-Munk theory in order to estimate those Neugebauer Primaries, so as to save resources, power and time. We used different kinds of paper to test and recommend the cheapest and most accurate way of using this theory. We then spectrally modelled the CMYK HP Deskjet 1220C printer and the Xerox Phaser 7760 laser printer using the spectral Neugebauer model and the Yule-Nielsen modified spectral Neugebauer (YNSN) model. Using these models we show how much improvement we can actually obtain by estimating the Neugebauer Primaries with Kubelka-Munk theory rather than using real measurements of them.
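The spectral Neugebauer prediction described in this abstract can be sketched in a few lines. This is an illustrative sketch, not code from the thesis: the Demichel weighting over the 2^n colorant combinations is standard, but the function names and toy spectra below are ours.

```python
from itertools import product

def demichel_weights(coverages):
    """Demichel dot-area weights for every on/off colorant combination.
    coverages: fractional area coverage of each of n colorants (0..1).
    Returns a dict mapping each combination (tuple of 0/1) to its weight;
    the 2**n weights sum to 1."""
    n = len(coverages)
    weights = {}
    for combo in product((0, 1), repeat=n):
        w = 1.0
        for c, on in zip(coverages, combo):
            w *= c if on else (1.0 - c)
        weights[combo] = w
    return weights

def neugebauer_reflectance(coverages, np_reflectances):
    """Spectral Neugebauer model: the predicted spectrum is the convex
    combination of the Neugebauer Primary spectra, weighted by the
    Demichel areas.  np_reflectances: dict combo -> reflectance samples."""
    weights = demichel_weights(coverages)
    n_bands = len(next(iter(np_reflectances.values())))
    r = [0.0] * n_bands
    for combo, w in weights.items():
        for i, refl in enumerate(np_reflectances[combo]):
            r[i] += w * refl
    return r
```

With n colorants the model needs 2^n primary spectra, which is exactly the exponential measurement cost the thesis avoids by estimating them with Kubelka-Munk theory.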

BibTeX:
@mastersthesis{Assefa2010,
  author = {Mekides Assefa Abebe},
  title = {Kubelka Munk Theory for Time Saving Spectral Printer Modelling},
  school = {Gj{\o}vik University College},
  year = {2010},
  url = {http://www.hig.no/content/download/28550/327658/file/Mekides.pdf}
}
BibTeX:
@techreport{Assefa2010a,
  author = {Mekides Assefa Abebe},
  title = {Literature Review Report. Current and Future Eye Tracking Experiments on Web and Print Document Feature Attractiveness},
  year = {2010},
  number = {9},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_14938}
}
Abstract: In the context of spectral color image reproduction by multi-channel inkjet printing a key challenge is to accurately model the colorimetric and spectral behavior of the printer. A common approach for this modeling is to assume that the resulting spectral reflectance of a certain ink combination can be modeled as a convex combination of the so-called Neugebauer Primaries (NPs); this is known as the Neugebauer Model. Several extensions of this model exist, such as the Yule-Nielsen Modified Spectral Neugebauer (YNSN) model. However, as the number of primaries increases, the number of NPs increases exponentially; this poses a practical problem for multi-channel spectral reproduction. In this work, the well known Kubelka-Munk theory is used to estimate the spectral reflectances of the Neugebauer Primaries instead of printing and measuring them, and subsequently we use these estimated NPs as the basis of our printer modeling. We have evaluated this approach experimentally on several different paper types and on the HP Deskjet 1220C CMYK inkjet printer and the Xerox Phaser 7760 CMYK laser printer, using both the conventional spectral Neugebauer model and the YNSN model. We have also investigated a hybrid model with mixed NPs, half measured and half estimated. Using this approach we find that we achieve not only cheap and less time consuming model establishment, but also, somewhat unexpectedly, improved model precision over the models using the real measurements of the NPs.
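The YNSN extension mentioned above applies the Yule-Nielsen n-factor to the Neugebauer mixture: per wavelength, R = (sum_i w_i R_i^(1/n))^n. A minimal sketch (illustrative names; n = 1 recovers the plain spectral Neugebauer model):

```python
def ynsn_reflectance(weights, np_reflectances, n_value):
    """Yule-Nielsen modified spectral Neugebauer (YNSN): mix the NP
    spectra in (1/n)-power space, then raise the result back to the
    n-th power.  weights: dict combo -> Demichel weight (sums to 1).
    np_reflectances: dict combo -> reflectance samples per wavelength.
    n_value: empirically fitted Yule-Nielsen factor."""
    n_bands = len(next(iter(np_reflectances.values())))
    mixed = [0.0] * n_bands
    for combo, w in weights.items():
        for i, refl in enumerate(np_reflectances[combo]):
            mixed[i] += w * refl ** (1.0 / n_value)
    return [m ** n_value for m in mixed]
```

The n-factor models optical dot gain; it is usually fitted to a small set of measured patches rather than derived from first principles.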
BibTeX:
@inproceedings{Abebe2011,
  author = {Mekides Assefa Abebe and Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Kubelka-Munk theory for efficient spectral printer modeling},
  booktitle = {Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE-IS\&T Electronic Imaging},
  volume = {7866},
  pages = {786614}
}
Abstract: In this report we are going to develop and analyze two new image difference metrics. We will focus on the SCIELABJOHNSON framework and in particular on the spatial filtering, which will be substituted by the Difference of Gaussians model.
Two research questions have been formulated as the basis for this thesis:
• How can we improve existing image difference metrics?
• What is the performance of the Difference of Gaussians model in image difference metrics?
A first test on a set of gamut mapped images will give an idea of the performance in correlation of the two metrics. A second experiment will be performed on images with single and multiple variations of contrast, lightness, and saturation. The performance in correlation will be given using data from a psychophysical experiment. Results will show that the proposed metrics have a low performance on the dataset of gamut mapped images, and that they do not seem to be appropriate for the second dataset either.
We will also demonstrate through the experiments the importance of spatial filtering for color image difference metrics, and also that the configuration of these two metrics should change according to the type of distorted images to ensure better performance.
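The Difference of Gaussians filter that substitutes the spatial filtering stage can be sketched as a narrow centre Gaussian minus a wider surround Gaussian, shown here in 1-D. The sigma values and radius are illustrative, not those fitted in the thesis:

```python
import math

def gaussian_kernel(sigma, radius):
    """Sampled, normalised 1-D Gaussian kernel."""
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def dog_kernel(sigma_centre, sigma_surround, radius):
    """Difference of Gaussians: narrow centre minus wider surround.
    The taps sum to ~0, so flat regions produce no response while
    edges and texture are emphasised."""
    c = gaussian_kernel(sigma_centre, radius)
    s = gaussian_kernel(sigma_surround, radius)
    return [ci - si for ci, si in zip(c, s)]
```

Convolving an image channel with such a kernel gives the centre-surround (receptive-field) response used in the Tadmor-Tolhurst contrast model the abstract refers to.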
BibTeX:
@mastersthesis{Ajagamelle2009a,
  author = {Sebastien Ajagamelle},
  title = {Analysis of the Difference of Gaussians Model in Perceptual Image Difference Metrics},
  school = {Gj{\o}vik University College and Grenoble Institute of Technology},
  year = {2009},
  url = {http://colorlab.no/content/download/25454/270990/file/Sebastien_Ajagamelle_Master_thesis.pdf}
}
Abstract: The goal of this work is to present and review two new image difference metrics, named SDOG-CIELAB and SDOG-DEE. These metrics are along the same lines as the standard S-CIELAB metric (Zhang and Wandell, 1997), modified to include a pyramidal subsampling, the Difference of Gaussians receptive-field model (DOG) (Tadmor and Tolhurst, 2000), and the ΔEE color-difference formula (Oleari et al., 2009). The DOG model and the ΔEE formula have been shown to improve contrast measures and image quality metrics, respectively (Simone et al., 2009). Extensive testing using 29 state-of-the-art metrics and six image databases has been performed. Although this new approach is promising, we only find weak evidence of effectiveness. Analysis of the results indicates that the metrics show fairly good correlations over particular test images, yet they do not outperform the most common objective quality measures.
BibTeX:
@inproceedings{Ajagamelle2010,
  author = {Sebastien Akli Ajagamelle and Marius Pedersen and Gabriele Simone},
  title = {Analysis of the Difference of Gaussians Model in Image Difference Metrics},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {489--496}
}
BibTeX:
@conference{Ajagamelle2009,
  author = {Sebastien Akli Ajagamelle and Gabriele Simone and Marius Pedersen},
  title = {Performance of the Difference of Gaussians Model in Image Difference Metrics},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {27-30},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
BibTeX:
@article{Alsam2008a,
  author = {Ali Alsam and David Connah},
  title = {Optimal bases for convex color mixture},
  month = {Feb},
  journal = {J. Opt. Soc. Am. A},
  year = {2008},
  volume = {25},
  pages = {3}
}
BibTeX:
@inproceedings{Alsam2005b,
  author = {Ali Alsam and David Connah},
  title = {Recovering Natural Reflectances with Convexity},
  booktitle = {Proceedings of the 10th Congress of the International Colour Association},
  address = {Granada, Spain},
  year = {2005},
  pages = {1677-1680},
  note = {ISBN 84-609-5164-2}
}
Abstract: We present a computationally efficient, artifact-free, spatial gamut mapping algorithm. The proposed algorithm offers a compromise between the colorimetrically optimal gamut clipping and an ideal spatial gamut mapping. This is achieved by the iterative nature of the method: at iteration level zero, the result is identical to gamut clipping; the more we iterate, the more we approach an optimal spatial gamut mapping result. Our results show that a low number of iterations, 20-30, is sufficient to produce an output that is as good as or better than that achieved by previous, computationally more expensive, methods. More importantly, we introduce a new method to calculate the gradients of a vector-valued image by means of a projection operator which guarantees that the hue of the gamut-mapped colour vector is identical to the original. Furthermore, the algorithm produces no visible halos in the gamut-mapped image, a problem which is common in previous spatial methods. Finally, the proposed algorithm is fast: its computational complexity is O(N), N being the number of pixels. Results based on a challenging small destination gamut support our claim that it is indeed efficient.
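The constant-hue projection idea, reduced to a single chroma plane, can be sketched as follows. This is not the authors' projection operator for image gradients, only an illustration of projecting a mapped colour back onto the hue line of the original colour:

```python
import math

def project_to_hue_line(orig_ab, mapped_ab):
    """Project a gamut-mapped chroma vector back onto the constant-hue
    line of the original colour in an (a, b) chroma plane, so the
    mapped colour keeps the original hue angle.  Illustrative sketch."""
    a0, b0 = orig_ab
    c0 = math.hypot(a0, b0)
    if c0 == 0.0:
        return (0.0, 0.0)          # achromatic input: stay on the grey axis
    ua, ub = a0 / c0, b0 / c0      # unit vector along the original hue
    t = mapped_ab[0] * ua + mapped_ab[1] * ub
    t = max(t, 0.0)                # never flip to the opposite hue
    return (t * ua, t * ub)
```

Any chroma introduced perpendicular to the original hue direction is discarded, which is what guarantees hue preservation in this toy version.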
BibTeX:
@inproceedings{Alsam2012,
  author = {Ali Alsam and Ivar Farup},
  title = {Spatial colour gamut mapping by orthogonal projection of gradients onto constant hue lines},
  booktitle = {8th International Symposium on Visual Computing},
  address = {Rethymnon, Crete, Greece},
  month = {July},
  publisher = {Springer},
  year = {2012},
  series = {LNCS},
  volume = {7431},
  pages = {556--565},
  url = {http://www.springerlink.com/content/7038j70047r28585/}
}
Abstract: We present a novel, computationally efficient, iterative, spatial gamut mapping algorithm. The proposed algorithm offers a compromise between the colorimetrically optimal gamut clipping and the most successful spatial methods. This is achieved by the iterative nature of the method: at iteration level zero, the result is identical to gamut clipping; the more we iterate, the more we approach an optimal, spatial, gamut mapping result. Optimal is defined as a gamut mapping algorithm that preserves the hue of the image colours as well as the spatial ratios at all scales. Our results show that as few as five iterations are sufficient to produce an output that is as good as or better than that achieved by previous, computationally more expensive, methods. Being able to improve upon previous results using such a low number of iterations allows us to state that the proposed algorithm is O(N), N being the number of pixels. Results based on a challenging small destination gamut support our claim that it is indeed efficient.
BibTeX:
@inproceedings{Alsam2009,
  author = {Ali Alsam and Ivar Farup},
  title = {Colour Gamut Mapping as a Constrained Variational Problem},
  booktitle = {16th Scandinavian Conference on Image Analysis},
  address = {Oslo, Norway},
  month = {Jun},
  year = {2009},
  series = {Lecture Notes in Computer Science},
  volume = {5575},
  pages = {109--118},
  url = {http://www.springerlink.com/link.asp?id=105633}
}
Abstract: We present a computationally efficient, artifact-free, spatial colour gamut mapping algorithm. The proposed algorithm offers a compromise between the colorimetrically optimal gamut clipping and an ideal spatial gamut mapping. It exploits anisotropic diffusion to reduce the halos that often appear in spatially gamut-mapped images. It is implemented as an iterative method: at iteration level zero, the result is identical to gamut clipping; the more we iterate, the more we approach an optimal, spatial gamut mapping result. Our results show that a low number of iterations, 10-20, is sufficient to produce an output that is as good as or better than that achieved by previous, computationally more expensive, methods. The computational complexity of one iteration is O(N), N being the number of pixels. Results based on a challenging small destination gamut support our claim that it is indeed efficient.
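A single explicit step of Perona-Malik-style anisotropic diffusion in 1-D illustrates the mechanism this abstract relies on: diffusion is strong in flat regions and suppressed across edges, which is what limits halo formation. The time step, edge threshold, and edge-stopping function below are illustrative, not the paper's:

```python
import math

def anisotropic_diffusion_step(signal, dt=0.2, kappa=0.5):
    """One explicit 1-D anisotropic diffusion step.  Each sample moves
    toward its neighbours, but the flux across a strong gradient is
    damped by an edge-stopping weight exp(-(grad/kappa)^2)."""
    n = len(signal)
    out = list(signal)
    for i in range(1, n - 1):
        d_east = signal[i + 1] - signal[i]
        d_west = signal[i - 1] - signal[i]
        c_east = math.exp(-(d_east / kappa) ** 2)  # ~0 across sharp edges
        c_west = math.exp(-(d_west / kappa) ** 2)
        out[i] = signal[i] + dt * (c_east * d_east + c_west * d_west)
    return out
```

Iterating such a step while repeatedly clipping to the destination gamut is, in spirit, how iteration count trades clipping-like behaviour against a smoother spatial result.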
BibTeX:
@inproceedings{Alsam2011,
  author = {Ali Alsam and Ivar Farup},
  title = {Spatial Colour Gamut Mapping by Means of Anisotropic Diffusion},
  booktitle = {Proceedings of the Third International Workshop on Computational Color Imaging (CCIW)},
  address = {Milano, Italy},
  month = {Apr},
  publisher = {Springer},
  year = {2011},
  series = {Lecture Notes in Computer Science},
  volume = {6626},
  pages = {113-124},
  doi = {10.1007/978-3-642-20404-3_9}
}
BibTeX:
@article{Alsam2008,
  author = {Ali Alsam and Graham Finlayson},
  title = {Integer Programming for Optimal Reduction of Calibration Targets},
  journal = {Color, Research \& Application},
  year = {2008},
  volume = {33},
  number = {3},
  pages = {212--220}
}
BibTeX:
@article{Alsam2007a,
  author = {Ali Alsam and Graham Finlayson},
  title = {Metamer Sets without Spectral Calibration},
  journal = {J. Opt. Soc. Am. A},
  year = {2007},
  volume = {24},
  pages = {2505-2512}
}
Abstract: Calibration charts are used in colour imaging to determine colour correction transforms and to spectrally characterise imaging devices. Traditionally, quite complex charts have evolved, as it was reasoned that the more reflectances a chart contains, the better it can represent all other reflectances. However, a chart with many reflectances is expensive, difficult, and tedious to use. The difficulty lies in assuming constant lighting conditions over the whole chart, and the tedium appears when the chart must be measured using a spectrophotometer. To circumvent these problems, researchers have sought methods to find smaller sets of reflectances which, in some sense, represent larger reflectance sets. In this paper we develop an iterative selection procedure in which we select individual reflectances from a colour chart. The first is chosen so that it best accounts for the majority of the spectral variance. The next best accounts for the variance that is left. In general, the ith selected chart reflectance best accounts for the remaining variance among reflectances (given that i-1 reflectances are already selected). We show that this procedure is weakly optimal and as such compares with prior art which chooses reflectances using simple heuristics. The new method is also much faster than algorithms that are built on stronger optimality conditions. Experiments demonstrate that our new method represents a reasonable compromise between fast (and feasible) reflectance selection and the optimality of the chosen set.
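The iterative selection procedure can be sketched as a greedy residual-variance search: pick the spectrum with the largest residual energy, project that direction out of every candidate, and repeat. This is an illustrative reading of the procedure, not the authors' implementation:

```python
def greedy_select(reflectances, k):
    """Greedily select k spectra: at each step take the candidate with
    the largest residual energy, then deflate all candidates by the
    chosen (normalised) direction, so the next pick accounts for the
    variance that is left.  Returns the chosen indices in order."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    residuals = [list(r) for r in reflectances]
    chosen = []
    for _ in range(k):
        i = max(range(len(residuals)),
                key=lambda j: dot(residuals[j], residuals[j]))
        d = residuals[i]
        energy = dot(d, d)
        if energy == 0.0:
            break                      # nothing left to explain
        chosen.append(i)
        q = [x / energy ** 0.5 for x in d]   # normalised direction
        for r in residuals:                  # deflate every candidate
            proj = dot(r, q)
            for m in range(len(r)):
                r[m] -= proj * q[m]
    return chosen
```

This greedy deflation is only weakly optimal, as the abstract notes, but each step is a single pass over the chart, which is what makes it fast.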
BibTeX:
@inproceedings{Alsam2006b,
  author = {Ali Alsam and Graham Finlayson},
  title = {Reducing the Number of Calibration Surfaces},
  booktitle = {Fourteenth Color Imaging Conference},
  address = {Scottsdale, Arizona, USA},
  month = {Nov},
  year = {2006},
  pages = {170-174},
  note = {ISBN / ISSN: 0-89208-291-7}
}
Abstract: Solving for a camera's sensors based on its response to the surfaces of a calibration target is an ill-conditioned problem with an infinite number of possible solutions. To obtain a stable estimate we need to control the solution space by constraining the sensors to match some known physical characteristics, e.g. sensors are normally constrained to be positive. The use of constraints limits the uncertainty encountered in sensor recovery and results in improved estimates. Unfortunately, it is not possible to know which exact constraints should be used in recovering an unknown sensor. In this paper we present a method to estimate the support (the region where the sensor's sensitivity is not zero) of a sensor prior to recovering it. If the sensor's support is limited, this constraint is very stringent, and imposing it on the solution space results in a clear reduction in the uncertainty encountered in the solution. In the results section we show that it is indeed possible to recover a sensor's bandwidth based on its response to a set of reflectances.
BibTeX:
@inproceedings{Alsam2004a,
  author = {Ali Alsam and Graham Finlayson},
  title = {Estimating the Bandlimits of an Unknown Sensor},
  booktitle = {Twelfth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications},
  address = {Scottsdale, AZ, USA},
  month = {Nov},
  year = {2004},
  pages = {217-222},
  note = {ISBN / ISSN: 0-89208-254-2}
}
BibTeX:
@inproceedings{Alsam2005,
  author = {Ali Alsam and Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Inversion of the Spectral Neugebauer Printer model},
  booktitle = {AIC Colour 05},
  month = {May},
  year = {2005},
  pages = {44--62}
}
Abstract: Calibration targets are widely used to characterize imaging devices and to estimate optimal profiles to map the response of one device to the space of another. The question addressed in this paper is how many surfaces in a calibration target are needed to account for the whole target perfectly. To answer this question accurately, we first note that the reflectance spectra space is closed and convex. Hence the extreme points of the convex hull of the data enclose the whole target, and it is thus sufficient to use the extreme points to represent the whole set. Further, we introduce a volume projection algorithm to reduce the extremes to a user-defined number of surfaces such that the remaining surfaces are more important, i.e. account for a larger number of surfaces, than the rest. When testing our algorithm using the Munsell book of colors with 1269 reflectances, we found that as few as 110 surfaces were sufficient to account for the rest of the data, and as few as 3 surfaces accounted for 86% of the volume of the whole set.
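The extreme-point idea can be illustrated in 2-D with a standard convex hull routine; the paper works in the high-dimensional reflectance space, and its volume projection step is not reproduced here. Interior points contribute nothing, since they are convex combinations of the hull's vertices:

```python
def convex_hull_2d(points):
    """Andrew's monotone chain: return the extreme points of a 2-D
    point set in counter-clockwise order.  A 2-D stand-in for the idea
    that only the extreme points of a closed convex set are needed to
    represent the whole set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```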
BibTeX:
@inproceedings{Alsam2005a,
  author = {Ali Alsam and Jon Yngve Hardeberg},
  title = {Convex reduction of calibration charts},
  booktitle = {Color Imaging X: Processing, Hardcopy, and Applications},
  address = {San Jose, California},
  month = {Jan},
  year = {2005},
  pages = {38-46},
  note = {ISBN / ISSN: 0-8194-5640-3}
}
Abstract: The gamut of a colour space is defined by a number of extreme points. The best inks to achieve an accurate spectral reproduction of a given target are those which span the target's spectral gamut. Using a modified non-negative matrix factorization (NMF) algorithm we derive m colorants and their spectral curves such that they are the extreme points of the target's gamut. Using the spectral Neugebauer printing model, where eight colorants are assumed, we compare our new method with existing techniques. Comparison with a set of optimal rotated principal vectors as well as the classical NMF clearly shows that the performance of the new method is superior.
BibTeX:
@inproceedings{Alsam2004,
  author = {Ali Alsam and Jon Yngve Hardeberg},
  title = {Optimal Colorant Design for Spectral Colour Reproduction},
  booktitle = {Twelfth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications},
  address = {Scottsdale, AZ, USA},
  month = {Nov},
  year = {2004},
  pages = {157-162},
  note = {ISBN / ISSN: 0-89208-254-2}
}
BibTeX:
@inproceedings{Alsam2004b,
  author = {Ali Alsam and Jon Yngve Hardeberg},
  title = {Smoothing Jagged Spectra for Accurate Spectral Sensitivities Recovery},
  booktitle = {Proc. International Conference on Computer Vision and Graphics},
  year = {2004}
}
BibTeX:
@inproceedings{Alsam2004c,
  author = {Ali Alsam and Jon Yngve Hardeberg},
  title = {Metamer Set Based Measures of Goodness for Colour Cameras},
  booktitle = {Proc. International Conference on Computer Vision and Graphics},
  year = {2004}
}
Abstract: We propose a colour to greyscale algorithm providing colour separation as well as edge and texture enhancement. An image dependent grey-axis is computed based on the colour distribution in the image. An initial greyscale image is created by a point-wise operation where the grey value is the magnitude of the RGB coordinates re-mapped to the grey axis. The resulting greyscale image is enhanced by applying a novel correction mask. This mask, resembling an unsharp mask, is the sum of the difference between each of the colour components and a blurred version of the greyscale image. The resulting greyscale images are rich in detail without undesirable artifacts.
BibTeX:
@inproceedings{Alsam2006a,
  author = {Ali Alsam and {\O}yvind Kol\r{a}s},
  title = {Grey Colour Sharpening},
  booktitle = {Fourteenth Color Imaging Conference},
  address = {Scottsdale, Arizona, USA},
  month = {Nov},
  year = {2006},
  pages = {263-267},
  note = {ISBN / ISSN: 0-89208-292-5}
}
BibTeX:
@article{Alsam2007,
  author = {Ali Alsam and Reiner Lenz},
  title = {Calibrating Color Cameras using Metameric Blacks},
  journal = {J. Opt. Soc. Am. A},
  year = {2007},
  volume = {24},
  number = {1},
  pages = {11-17}
}
Abstract: Spectral calibration of digital cameras based on the spectral data of commercially available calibration charts is an ill-conditioned problem which has an infinite number of solutions. To improve upon the estimate, different constraints are commonly employed. Traditionally, such constraints include non-negativity, smoothness, uni-modality, and that the estimated sensor results in as good a response fit as possible.
In this paper, we introduce a novel method to solve a general ill-conditioned linear system, with special focus on the solution of spectral calibration. We introduce a new approach based on metamerism. We observe that the difference between two metamers (spectra that integrate to the same sensor response) is in the null-space of the sensor. These metamers are used to robustly estimate the sensor's null-space. Based on this null-space, we derive projection operators to solve for the range of the unknown sensor. Our new approach has a number of advantages over standard techniques: it involves no minimization, which means that the solution is robust to outliers and is not dominated by larger response values. It also offers the ability to evaluate the goodness of the solution, where it is possible to show that the solution is optimal, given the data, if the calculated range is one-dimensional.

When comparing the new algorithm with the truncated singular value decomposition and Tikhonov regularization, we found that the new method performs slightly better for the training set, with noticeable improvements for the test data.
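The core metameric-blacks observation, that differences of metamers span the sensor's null-space and can be projected out, admits a toy sketch. Here three spectral bands and a hand-picked metamer pair stand in for the measured chart data the real method uses:

```python
def orthonormalise(vectors, tol=1e-9):
    """Gram-Schmidt: orthonormal basis for the span of the inputs.
    Applied to metamer differences, this estimates a null-space basis."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            proj = sum(x * y for x, y in zip(w, b))
            w = [x - proj * y for x, y in zip(w, b)]
        norm = sum(x * x for x in w) ** 0.5
        if norm > tol:
            basis.append([x / norm for x in w])
    return basis

def project_out(v, null_basis):
    """Remove the null-space component of v, leaving its component in
    the (estimated) range of the sensor: v - sum_b <v, b> b."""
    w = list(v)
    for b in null_basis:
        proj = sum(x * y for x, y in zip(w, b))
        w = [x - proj * y for x, y in zip(w, b)]
    return w
```

For a sensor (1, 1, 0), the spectra (1, 0, 0) and (0, 1, 0) are metamers, so their difference (1, -1, 0) lies in the null-space; projecting any candidate onto the complement of that direction recovers the sensor's range, with no minimization involved.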

BibTeX:
@inproceedings{Alsam2006,
  author = {Ali Alsam and Reiner Lenz},
  title = {Calibrating Color Cameras Using Metameric Blacks},
  booktitle = {CGIV 2006 -- Third European Conference on Color in Graphics, Imaging and Vision},
  address = {Leeds, UK},
  year = {2006},
  pages = {75-80},
  note = {ISBN / ISSN: 0-89208-262-3}
}
Abstract: The novel field of aesthetic quality inferencing of natural images deals with the automatic assessment of the aesthetic value of a given photograph, by either numerically rating it, or by classifying it as a professional, high-level picture or as a low quality snapshot. Based on the extraction of low-level features, a number of authors have tried to bridge the aesthetic gap given by the inherently subjective nature of aesthetics and, using machine learning techniques, set the basis for the development of a field with potential applications in areas as diverse as CBIR, management and editorial work or consumer photography. Their methods range from the opaque, black-box approach to more content-aware procedures, which define their feature set after well-established photographic techniques and build their success upon the prior identification of the photographic subject.

Our approach aims to go one step further in the understanding of the image content and, under the assumption that different subject categories require different composition techniques, introduces an additional scene type classification step, which, combined with the use of state of the art feature sets, should yield a significant improvement over the current performance results.

BibTeX:
@mastersthesis{Alvarez2010,
  author = {Aitor Alvarez},
  title = {Scene recognition for improved aesthetic quality inference of photographic images},
  school = {Gj{\o}vik University College},
  year = {2010},
  keywords = {Aesthetics, image quality, aesthetic quality, photography, image classification, pattern recognition, image processing, imaging}
}
Abstract: In the last couple of years a huge amount of work has shifted from Image Quality Assessment to Video Quality Assessment. Although some metrics have started to take the temporal aspects of videos into account, most metrics still focus on the spatial distortions in videos, in other words applying image quality metrics to individual frames. Also, until now most metrics have focused on the Quality of Service (QoS) rather than the Quality of Experience (QoE). With respect to these factors we believe there is a need for a new metric which takes a spatial-temporal approach and takes QoE into account, and the proposed metric is based on these two main approaches. Because of its spatial-temporal approach, the metric was named STAQ (Spatial-Temporal Assessment of Quality).
Our proposed method is based on the fact that the Human Visual System (HVS) is sensitive to sharp changes in videos. Keeping this in mind, we conclude that there will be matching regions in consecutive frames. We take advantage of this point, find these regions, and use a full-reference image quality metric to evaluate the quality of these frames. We also use five different motion activity density groups to evaluate the amount of motion in the video. The final score is then pooled using five different pooling functions, each representing one of the motion activity groups. In other words, we use QoE, i.e. information from subjective evaluation, as a controlling factor in our method.
When the proposed reduced-reference metric is compared to ten different state-of-the-art full-reference metrics, the results show a great improvement in the case of H.264 compressed videos. We also reached good results for MPEG-2 compressed videos and videos affected by IP distortion. With respect to the results achieved, we claim that the introduced metric is among the best metrics so far and has made particularly large progress in the case of H.264 compressed videos.
BibTeX:
@mastersthesis{Amirshahi2010,
  author = {Ali Amirshahi},
  title = {Towards a perceptual metric for video quality assessment},
  school = {Gj{\o}vik University College},
  year = {2010}
}
Abstract: This thesis deals with gamut mapping, and the experiment is based on TC 8-03 from the CIE. The paper starts with a state of the art. Four algorithms for gamut mapping are tested: chroma-dependent sigmoidal lightness mapping and cusp knee scaling (SGCK), hue-angle preserving minimum ΔE*ab clipping (Clip), GAMMA, and a combination of SGCK and Clipping (SGCKC). The mapping is carried out from sRGB to two destination media, an HP Color LaserJet 4550 PS and a CPS700 from Oce. After combining the results of both experiments, SGCKC is significantly better than SGCK and much better than GAMMA and Clip. SGCKC gives better saturation and as good details as SGCK. Clip had the strongest saturations, but many details were clipped off, which results in artefacts. GAMMA had the darkest pictures, but fewer artefacts.
BibTeX:
@mastersthesis{Amsrud2003,
  author = {Morten Amsrud},
  title = {Forbedring og evaluering av algoritmer for fargeomfangstilpasning},
  school = {Gj{\o}vik University College},
  year = {2003},
  url = {http://www.nada.kth.se/utbildning/grukth/exjobb/rapportlistor/2003/rapporter03/amsrud_morten_03164.pdf}
}
Abstract: In this paper we present a colorimetric characterization method for digital color cameras, based on hue plane and white point preservation. The present implementation of the method incorporates a series of 3 by 3 matrices, each responsible for the transformation of a subset of camera RGB-values to colorimetric XYZ-values. The method is compared to three other common characterization methods based on least squares fitting: an unconstrained 3 by 3 matrix, a white point preserving 3 by 3 matrix, and a second order polynomial.
The methods have been evaluated on real camera signals from an Imacon Ixpress professional digital CCD camera under flash light. The GretagMacbeth Color Checker and Color Checker DC charts have been used as test set and training set, respectively. The method is evaluated in combination with a noise susceptibility estimation of the training set samples and a preliminary subdivision of the hue domain, which reduces the number of test samples needed in the characterization. The noise estimation is based on a geometric analysis in camera chromaticity space.
BibTeX:
@inproceedings{Andersen2005,
  author = {Casper Find Andersen and Jon Yngve Hardeberg},
  title = {Colorimetric Characterization of Digital Cameras Preserving Hue Planes},
  booktitle = {Thirteenth Color Imaging Conference},
  address = {Scottsdale, Arizona, USA},
  month = {Nov},
  year = {2005},
  pages = {141-146},
  note = {ISBN / ISSN: 0-89208-259-3}
}
BibTeX:
@inproceedings{Andersen2005a,
  author = {Casper Find Andersen and Jon Yngve Hardeberg},
  title = {Hue plane preserving colorimetric characterization of digital cameras},
  booktitle = {Proceedings of the 10th Congress of the International Colour Association},
  address = {Granada, Spain},
  month = {May},
  year = {2005},
  pages = {287-290},
  note = {ISBN 84-609-5163-4}
}
Abstract: We present subjective evaluations of example-based regularization, total variation regularization, and a proposed joint example-based and total variation regularization for image estimation problems. We focus on the noisy deblurring problem, which generalizes image superresolution and denoising. Controlled subjective experiments show that the proposed joint regularization can yield significant improvement over using only total variation or example-based regularization, particularly when the example images contain structural elements similar to those of the test image. We also investigate whether the regularization parameters can be trained by cross-validation, and the difference in cross-validation judgments made by humans or by fully automatic image quality metrics. Experiments show that of five image quality metrics tested, the structural similarity index (SSIM) correlates best with human judgement of image quality, and can probably be used to cross-validate regularization parameters. However, there is a significant quality gap depending on whether the parameters are cross-validated by humans or with the best image quality metric.
BibTeX:
@inproceedings{Anderson2012,
  author = {Hyrum S. Anderson and Maya R. Gupta and Jon Yngve Hardeberg},
  title = {Subjective evaluations of example-based, total variation, and joint regularization for image processing},
  booktitle = {Computational Imaging X: Enhancement, Denoising, and Restoration II},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2012},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {8296},
  pages = {8296-26},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_28049}
}
BibTeX:
@techreport{Anderson2011,
  author = {Hyrum S. Anderson and Maya R. Gupta and Jon Y. Hardeberg},
  title = {Subjective Evaluations of Example-based, Total Variation, and Combined Regularization for Image Processing},
  year = {2011},
  number = {5},
  url = {https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2011-0005.pdf}
}
BibTeX:
@inproceedings{Anderson2009,
  author = {Hyrum S. Anderson and Jon Yngve Hardeberg and Maya R. Gupta},
  title = {Full Reference Image Quality Metrics for Optimizing Example-based Total Variation Deblurring},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {38-44},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
BibTeX:
@mastersthesis{Arief2011,
  author = {Ibrahim Arief},
  title = {A Novel Shadow-based Illumination Estimation method for Mobile Augmented Reality System},
  school = {Gj{\o}vik University College},
  year = {2011}
}
BibTeX:
@inproceedings{Arief2012,
  author = {Ibrahim Arief and Simon McCallum and Jon Yngve Hardeberg},
  title = {Realtime Estimation of Illumination Direction for Augmented Reality on Mobile Devices},
  booktitle = {Color and Imaging Conference},
  address = {Los Angeles, CA, USA},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2012},
  pages = {111--116}
}
Abstract: Color image quality is an important factor in various media such as digital cameras, displays and printing systems. The use of different color imaging media leads to a persistent problem: each medium reproduces color differently. This forces manufacturers to focus on technology for achieving successful cross-media reproduction. In this case the image reproduction quality depends on the process of device characterization. Device characterization and profiling are central processes which allow the result of a device's reproduction to be predicted from a known input, and which provide communication between devices. Look-up tables (LUTs) are the most common empirical approach for device characterization, and are the basis for ICC profiles. Smooth LUT-based color conversion in device characterization is an important factor for achieving high quality in the reproduced color image. Factors such as LUT size, interpolation methods, unavoidable noise in the color measurement process and unstable printing processes influence the smoothness of LUT-based color transforms, and can result in artifacts in the final reproduced images.

The main goal of this project is to find a way of quantifying the smoothness of color transforms through analysis of the LUT-based device characterization process and the factors that affect it, and to test different image quality metrics. This requires conducting a number of experiments to determine and estimate thresholds of these factors for predicting unsmooth transforms, together with human perceptual evaluation of the smoothness of color transforms in comparison with the quantitative results provided by different image quality metrics. Evaluating the smoothness of LUT-based color transforms makes it possible to avoid undesirable results in image color reproduction systems, such as artifacts and distortions of image content, and to improve the device characterization process in order to achieve smooth color transforms. It is an attempt to continue work towards fast and economical high-quality reproduction of color images and the technological advances needed to obtain it.


BibTeX:
@mastersthesis{Aristova2010,
  author = {Anna Aristova},
  title = {Smoothness of color transforms},
  school = {Gj{\o}vik University College},
  year = {2010},
  keywords = {device characterization, calibration, color conversion, LUT, look up table, image color quality, smoothness, ICC profiles, color management},
  url = {http://www.hig.no/content/download/28545/327643/file/Anna.pdf}
}
Abstract: Multi-dimensional look-up tables (LUTs) are widely employed for color transformations due to their high accuracy and general applicability. Using the LUT model generally involves the color measurement of a large number of samples. The precision and uncertainty of the color measurement will mainly be represented in the LUTs, and will affect the smoothness of the color transformation. This, in turn, strongly influences the quality of the reproduced color images. To achieve high quality color image reproduction, the color transformation is required to be relatively smooth. In this study, we have investigated the inherent characteristics of LUT transformations derived from color measurement and their effects on the quality of reproduced images. We propose an algorithm to evaluate the smoothness of 3D LUT based color transformations quantitatively, based on the analysis of 3D LUT transformations from RGB to CIELAB and the second derivative of the differences between adjacent points in vertical and horizontal ramps of each LUT entry. The performance of the proposed algorithm was compared with those proposed in two recent studies on smoothness, and better performance is reached by the proposed method.
BibTeX:
@inproceedings{Aristova2011,
  author = {Anna Aristova and Zhaohui Wang and Jon Yngve Hardeberg},
  title = {Evaluating the smoothness of color transformations},
  booktitle = {Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7866},
  pages = {78660M}
}
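The second-derivative smoothness measure described in the abstract above can be sketched roughly as follows; this is a minimal illustration under our own assumptions (the function name, weighting, and pooling are ours, not the paper's exact algorithm):

```python
import numpy as np

# Rough sketch (our own naming and weighting, not the paper's exact method):
# score one ramp of a LUT by the second differences of the colour-difference
# steps between adjacent CIELAB entries -- smaller means smoother.
def ramp_smoothness(lab_ramp):
    """lab_ramp: (N, 3) array of CIELAB values along one LUT ramp."""
    lab_ramp = np.asarray(lab_ramp, dtype=float)
    steps = np.linalg.norm(np.diff(lab_ramp, axis=0), axis=1)  # adjacent colour differences
    accel = np.diff(steps)                                     # second-derivative proxy
    return float(np.mean(np.abs(accel)))

# A perfectly linear ramp is maximally smooth:
linear = np.linspace([0.0, 0.0, 0.0], [100.0, 0.0, 0.0], 17)
print(ramp_smoothness(linear))  # prints 0.0
```

Noise in the measured LUT entries makes the step sizes uneven, which this score picks up as a larger mean second difference.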
BibTeX:
@mastersthesis{Azarijafari2013,
  author = {Parinaz Azarijafari},
  title = {A mobile application proposed to provide foundation recommendation to women based on their face skin color employing color science},
  school = {Gj{\o}vik University College},
  year = {2013}
}
Abstract: Production and proofing substrates often differ in their white points. Substrate white points frequently differ between reference and sample, for example between proof and print, or between a target paper colour and an actual production paper. It is possible to generate characterization data for the printing process on the production side to achieve an accurate colorimetric match, but in many cases it is not practical to generate this data empirically by printing samples and measuring them. This approach, however, does not account for any degree of adaptation between the differing substrate white points, whereas its acceptability may depend on accounting for the change in paper colour so that the appearance of the original is preserved when printed on the production substrate.
BibTeX:
@inproceedings{Baah2013,
  author = {Kwame Baah and Phil Green and Michael Pointer},
  title = {Perceived acceptability of colour matching for changing substrate white point},
  booktitle = {Color Imaging XVIII: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2013},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {8652},
  pages = {86520Q},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1568786}
}
BibTeX:
@article{Badano2014,
  author = {A. Badano and C. Revie and D. Treanor and T. Heki and C. Sisson and S. Skr{\o}vseth and A. Casertano and T. Kimpe and E. Krupinski and P. J. Green and others},
  title = {Current challenges in the handling of color in medical imaging},
  journal = {Journal of Digital Imaging},
  year = {2014},
  doi = {10.1007/s10278-014-9721-0}
}
Abstract: People who have an interest in the field of color imaging sometimes find that they need to work with the spectral power distributions of surface reflectances. Traditional color models and color spaces are unable to accurately take into account the effects that different illuminants have on the perception of the color of a surface, whereas spectrally based calculations perform better at this task. The spectral power distributions are usually represented by samples taken at a number of wavelengths. Up to 31 components are used to describe a spectrum, in order to preserve the desired level of detail. The high dimensionality of such data sets is inconvenient, since people are unable to easily analyze such data with regard to certain questions. This includes the task of deciding whether a spectrum is reproducible on a given output device. We plan to introduce a method for the visualization of multispectral color gamuts (the set of colors that a device can reproduce), and analyze how this can be used to answer such questions. In order to do this, we are going to take advantage of existing methods for simplifying sets of data, and review alternatives for comparing spectral colors (spectral match). We expand the concept of reproducibility, and try to determine not only whether a given spectral reflectance curve is within the spectral gamut of a device, but also to describe its position relative to the surface of the gamut. As a related result, we suggest a possible method for spectral gamut mapping. This refers to the process of mapping spectral reflectances (e.g. multispectral images) from a source to a specific device gamut, in order to reproduce them on a medium.
BibTeX:
@mastersthesis{Bakke2005,
  author = {Arne Magnus Bakke},
  title = {Visualisering av multispektrale fargedata (Visualisation of multispectral colour data)},
  school = {Gj{\o}vik University College},
  year = {2005},
  url = {http://www.colorlab.no/content/download/21930/215635/file/Arne_Magnus_Bakke_Master_thesis.pdf}
}
Abstract: Gamut boundary determination is an important step in device characterisation and colour gamut mapping. Many different algorithms for the determination of colour gamuts are proposed in the literature. They vary in accuracy, computational efficiency, and complexity of the resulting triangulated gamut surface. Recently, an algorithm called uniform segment visualization (USV) was developed. The gamut surfaces produced by the USV algorithm are more accurate than those produced by the segment maxima algorithm, while at the same time they are significantly simpler than those produced by the somewhat more accurate modified convex hull. In this paper, we propose a new method. First, an accurate gamut boundary is computed using the modified convex hull. The resulting surface is then simplified using an established mesh decimation technique. This results in surfaces that are significantly more accurate than the ones produced by the USV algorithm at a comparable complexity.
BibTeX:
@inproceedings{Bakke2010,
  author = {Arne Magnus Bakke and Ivar Farup},
  title = {Simplified Gamut Boundary Representation Using Mesh Decimation},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {459--465}
}
Abstract: Gamut mapping algorithms are currently being developed to take advantage of the spatial information in an image to improve the utilization of the destination gamut. These algorithms try to preserve the spatial information between neighboring pixels in the image, such as edges and gradients, without sacrificing global contrast. Experiments have shown that such algorithms can result in significantly improved reproduction of some images compared with non-spatial methods. However, due to the spatial processing of images, they introduce unwanted artifacts when used on certain types of images. In this paper we perform basic image analysis to predict whether a spatial algorithm is likely to perform better or worse than a good, non-spatial algorithm. Our approach starts by detecting the relative amount of areas in the image that are made up of uniformly colored pixels, as well as the amount of areas that contain details in out-of-gamut regions. A weighted difference is computed from these numbers, and we show that the result has a high correlation with the observed performance of the spatial algorithm in a previous psychophysical experiment.
BibTeX:
@inproceedings{Bakke2009,
  author = {Arne Magnus Bakke and Ivar Farup and Jon Yngve Hardeberg},
  title = {Predicting the performance of a spatial gamut mapping algorithm},
  booktitle = {Color Imaging XIV: Displaying, Hardcopy, Processing, and Applications},
  address = {San Jose, CA, USA},
  month = {Jan},
  year = {2009},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7241}
}
Abstract: Several techniques for the computation of gamut boundaries have been presented in the past. In this paper we take an in-depth look at some of the gamut boundary descriptors used in today's gamut mapping algorithms. We present a method for evaluating the mismatch introduced when using a descriptor to approximate the boundary of a device gamut. First, a visually verified reference gamut boundary is created by triangulating the gamut surface using a device profile or a device characterization model. The different gamut boundary descriptor techniques are then used to construct gamut boundaries based on several sets of simulated measurement data from the device. These boundaries are then compared against the reference gamut by utilizing a novel voxel based approach. Results from experiments using several gamut boundary descriptors are presented and analyzed statistically. The modified convex hull algorithm proposed by Balasubramanian and Dalal performs well for all the different data sets.
BibTeX:
@article{Bakke2010a,
  author = {Arne Magnus Bakke and Ivar Farup and Jon Yngve Hardeberg},
  title = {Evaluation of Algorithms for the Determination of Color Gamut Boundaries},
  month = {Sep},
  journal = {Journal of Imaging Science and Technology},
  year = {2010},
  volume = {54},
  number = {5},
  pages = {050502-(11)}
}
Abstract: We propose a new method for the computation of gamut boundaries, consisting of a combination of the segment maxima gamut boundary descriptor, the modified convex hull algorithm, and a sphere tessellation technique. This method gives a more uniform subdivision of the colour space into segments, and thus a more consistent level of detail over the gamut surface. First, the colour space is divided into segments around a centre point using the triangles from the tessellation algorithm. The measurement points are processed, and the point with the largest radius is found for each non-empty segment. The convex hull algorithm with a preprocessing step is then applied to these maxima points to generate the final gamut surface. The method is tested on different input data, including data sets both with and without internal gamut points. Different numbers of segments are used, and the resulting gamut boundaries are compared with the gamuts constructed using the segment maxima method. A reference gamut is constructed for each device, and the average mismatch is calculated. Our method is shown to perform better than the segment maxima method, particularly for a higher number of segments.
BibTeX:
@conference{Bakke2008,
  author = {Arne Magnus Bakke and Ivar Farup and Jon Yngve Hardeberg},
  title = {Improved gamut boundary determination for color gamut mapping},
  booktitle = {IARIGAI},
  address = {Valencia, Spain},
  month = {Sep},
  year = {2008}
}
Abstract: A method is proposed for performing spectral gamut mapping, whereby spectral images can be altered to fit within an approximation of the spectral gamut of an output device. Principal component analysis (PCA) is performed on the spectral data, in order to reduce the dimensionality of the space in which the method is applied. The convex hull of the spectral device measurements in this space is computed, and the intersection between the gamut surface and a line from the center of the gamut towards the position of a given spectral reflectance curve is found. By moving the spectra that are outside the spectral gamut towards the center until the gamut is encountered, a spectral gamut mapping algorithm is defined. The spectral gamut is visualized by approximating the intersection of the gamut and a 2-dimensional plane. The resulting outline is shown along with the center of the gamut and the position of a spectral reflectance curve. The spectral gamut mapping algorithm is applied to spectral data from the Macbeth Color Checker and test images, and initial results show that the amount of clipping increases with the number of dimensions used.
BibTeX:
@inproceedings{Bakke2005a,
  author = {Arne Magnus Bakke and Ivar Farup and Jon Yngve Hardeberg},
  title = {Multispectral gamut mapping and visualization: a first attempt},
  booktitle = {Color Imaging X: Processing, Hardcopy, and Applications},
  address = {San Jose, California, USA},
  month = {Jan},
  year = {2005},
  pages = {193-200},
  note = {ISBN / ISSN: 0-8194-5640-3}
}
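The clipping step described in the abstract above (moving an out-of-gamut point towards the gamut centre until the boundary is met) can be illustrated in two dimensions. This is a hedged sketch under our own assumptions: the helper names are ours, and a convex polygon stands in for the convex hull computed in the PCA-reduced space:

```python
import numpy as np

# Illustrative 2D version (our own helpers, not the paper's code) of clipping
# a point towards the gamut centre. The gamut boundary here is a convex
# polygon with vertices in counter-clockwise (CCW) order.
def inside_convex(poly, p):
    # A point is inside a CCW convex polygon iff it lies left of every edge.
    a, b = poly, np.roll(poly, -1, axis=0)
    cross = (b[:, 0] - a[:, 0]) * (p[1] - a[:, 1]) - (b[:, 1] - a[:, 1]) * (p[0] - a[:, 0])
    return bool(np.all(cross >= -1e-12))

def clip_towards_center(poly, p, tol=1e-9):
    c = poly.mean(axis=0)              # gamut centre (centroid of vertices)
    if inside_convex(poly, p):
        return p                       # already in gamut: leave unchanged
    lo, hi = 0.0, 1.0                  # bisect along the segment p -> c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if inside_convex(poly, p + mid * (c - p)):
            hi = mid
        else:
            lo = mid
    return p + hi * (c - p)            # first in-gamut point along the segment

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(clip_towards_center(square, np.array([2.0, 0.5])))  # approx [1.0, 0.5]
```

In the full method the same intersection search runs in the PCA space of the spectral measurements, against the triangulated convex hull rather than a polygon.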
Abstract: Several techniques for the computation of gamut boundaries have been presented in the past. In this paper we take an in-depth look at some of the gamut boundary descriptors used in today's gamut mapping algorithms. We present a method for evaluating the mismatch introduced when using a descriptor to approximate the boundary of a device gamut. First, a visually verified reference gamut boundary is created by triangulating the gamut surface using a device profile or a device characterization model. The different gamut boundary descriptor techniques are then used to construct gamut boundaries based on several sets of simulated measurement data from the device. These boundaries are then compared against the reference gamut by utilizing a novel voxel based approach. Preliminary results from experiments using several gamut boundary descriptors are presented, and the performance of the different algorithms is discussed.
BibTeX:
@inproceedings{Bakke2006,
  author = {Arne Magnus Bakke and Jon Yngve Hardeberg and Ivar Farup},
  title = {Evaluation of Gamut Boundary Descriptors},
  booktitle = {Fourteenth Color Imaging Conference},
  address = {Scottsdale, Arizona, USA},
  month = {Nov},
  year = {2006},
  pages = {50-55},
  note = {ISBN / ISSN: 0-89208-291-7}
}
BibTeX:
@inproceedings{Bakke2009a,
  author = {Arne Magnus Bakke and Jon Yngve Hardeberg and Steffen Paul},
  title = {Simulation of film media in motion picture production using a digital still camera},
  booktitle = {Image Quality and System Performance VI},
  address = {San Jose, CA, USA},
  month = {Jan},
  year = {2009},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7242}
}
BibTeX:
@misc{Bakke2002,
  author = {Arne M. Bakke and St\r{a}le Kopperud and Anders Rindal},
  title = {Visualisering av 3D fargerom (visualisation of 3D colour spaces)},
  year = {2002},
  note = {Bachelor thesis (BEng Computer Science). Gj{\o}vik University College},
  url = {http://www.colorlab.no/content/download/21982/216269/file/Rindal_Bachelor_thesis.pdf}
}
BibTeX:
@inproceedings{Bakke2009b,
  author = {Arne Magnus Bakke and Jean-Baptiste Thomas and J{\'e}r{\'e}mie Gerhardt},
  title = {Common Assumptions in Color Characterization of Projectors},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {45-53},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
BibTeX:
@inproceedings{Bakken2013,
  author = {Eskild Bakken and Jon Yngve Hardeberg},
  title = {Architectural color and universal design - ethics vs. aesthetics?},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
Abstract: We carried out a CRT monitor based psychophysical experiment to investigate the quality of three colour image difference metrics: the CIE ΔE*ab equation, the iCAM and the S-CIELAB metrics. Six original images were reproduced through six gamut mapping algorithms for the observer experiment. The result indicates that the colour image difference calculated by each metric does not directly relate to perceived image difference.
BibTeX:
@inproceedings{Bando2005,
  author = {Eriko Bando and Jon Yngve Hardeberg and David Connah},
  title = {Can gamut mapping quality be predicted by colour image difference formulae?},
  booktitle = {Human Vision and Electronic Imaging X},
  address = {San Jose, California},
  month = {Mar},
  year = {2005},
  pages = {180-191},
  note = {ISBN / ISSN: 0-8194-5639-X}
}
Abstract: We carried out a CRT monitor based psychophysical experiment to investigate the quality of three colour image difference metrics: the CIE ΔE*ab equation, the iCAM and the S-CIELAB metrics. Six original images were reproduced through six gamut mapping algorithms for the observer experiment. The result indicates that the colour image difference calculated by each metric does not directly relate to perceived image difference.
BibTeX:
@conference{Bando2004,
  author = {Eriko Bando and Jon Yngve Hardeberg and David Connah and Ivar Farup},
  title = {Predicting visible image degradation by colour image difference formulae},
  booktitle = {The 5th International Conference on Imaging Science and Hardcopy, Volume 25 of Chinese Journal of Scientific Instrument},
  address = {China},
  year = {2004},
  pages = {121-124}
}
BibTeX:
@inproceedings{Beigpour2015,
  author = {Shida Beigpour and Marius Pedersen},
  title = {Color play: Gamification for color vision study},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015}
}
Abstract: When the bright light from a camera flash bulb hits the retina at the back of the eye, some of the light is reflected and colored red by the blood vessels. This red light can be seen as a glaring red dot in the face of the subject, a very common effect in amateur photography. The aim of this project is to locate and correct this effect without user interaction, on large sets of pictures, and without any knowledge of the image quality or composition. To locate red eyes, one first looks for faces in the image. By looking at the color properties of skin, and using methods described in the research literature, one can to a certain degree segment out skin regions. Within these presumed skin regions, we search for areas with a high concentration of red to locate the red eyes in the picture. Much of the difficulty lies in segmenting red eyes from the rest of the picture; like skin, red eyes have varying color properties. When a red eye is located, the pixels in the located area are replaced with achromatic pixels, to give the pupil as natural a look as possible. This project report describes the testing and evaluation of methods for the detection and correction of red eyes, and combinations of methods to improve the technology. Tests showed that my methods did not produce satisfactory results; many of the problems lie in the edge detection used to locate potential red eyes.
BibTeX:
@mastersthesis{Bjerkvik2004,
  author = {{\O}yvind Bjerkvik},
  title = {Automatisk korreksjon av r{\o}de {\o}yne i digitale bilder (Automatic red-eye effect correction in digital images)},
  school = {Gj{\o}vik University College},
  year = {2004},
  url = {http://www.colorlab.no/content/download/21939/215662/file/Oyvind_Bjerkvik_Master_thesis.pdf}
}
Abstract: Many methods have been developed in image processing for face recognition, especially in recent years with the increase of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range of the electromagnetic spectrum. The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques used take advantage of the different spectral ranges, the visible, optical infrared and thermal infrared, by either combining them or analyzing them separately in order to extract the most appropriate information for face recognition. We also verify the consistency of several keypoints extraction techniques in the Near Infrared (NIR) and in the Visible Spectrum.
BibTeX:
@inproceedings{Boisier2011,
  author = {Bertrand Boisier and B. Billiot and Z. Abdessalem and Pierre Gouton and Jon Yngve Hardeberg},
  title = {Extraction and fusion of spectral parameters for face recognition},
  booktitle = {Image Processing: Machine Vision Applications IV},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7877},
  pages = {787732}
}
Abstract: In this paper we show that using Principal Component Analysis (PCA) on accelerometer-based gait data gives a large improvement in performance. On a dataset of 720 gait samples (60 volunteers and 12 gait samples per volunteer) we achieved an EER of 1.6%, while the best result so far, using the Average Cycle Method (ACM), was nearly 6%. This tremendous improvement makes gait recognition a viable method for commercial applications in the near future.
BibTeX:
@inproceedings{Bours2010,
  author = {Patrick Bours and Raju Shrestha},
  title = {Eigensteps: A giant leap for gait recognition},
  booktitle = {The 2nd International Workshop on Security and Communication Networks (IWSCN)},
  month = {May},
  year = {2010},
  pages = {1--6},
  keywords = {accelerometer based gait data;average cycle method;eigensteps;gait recognition;principle component analysis;accelerometers;computer vision;gait analysis;gesture recognition;principal component analysis;},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5497991},
  doi = {10.1109/IWSCN.2010.5497991}
}
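For context, the equal error rate (EER) quoted in the abstract above is the operating point where the false accept rate equals the false reject rate. A generic way to compute it from verification scores can be sketched as follows (our own helper, not the paper's code; higher score is assumed to mean a better match):

```python
import numpy as np

# Rough sketch of EER computation from verification scores (our own naming):
# sweep a decision threshold over all observed scores and find where the
# false accept rate (FAR) and false reject rate (FRR) cross.
def equal_error_rate(genuine, impostor):
    """genuine: scores for true matches; impostor: scores for non-matches."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))   # threshold where the two rates meet
    return 0.5 * (far[i] + frr[i])

# Perfectly separated score distributions give an EER of 0:
print(equal_error_rate([0.9, 0.8, 0.95], [0.1, 0.2]))  # prints 0.0
```

With overlapping score distributions the crossing point moves above zero; the 1.6% reported in the paper corresponds to 1.6% of attempts failing at that balanced threshold.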
Abstract: Color is an important visual cue and provides many key features for image content analysis. Most current color descriptors, such as the dominant color descriptor, still have limitations in depicting how human color perception responds to images. One big issue is that the prominent colors in the local spatial organization of the visual scene are often not the objects the user intends to find. Much redundancy is thereby incurred in the image descriptors, leading to great complexity in the similarity measure. Therefore, in this thesis we propose a compact global color descriptor which is able to extract the distinct features in the visual scene. The objective falls into two intertwined parts. Firstly, a salient feature detection approach is proposed based on the opponent color space. All the dominant colors, extracted by Particle Swarm Optimization (PSO), are transformed to an iso-salient space to boost their color saliency. An evaluation test is performed to compare the extracted salient regions with a human gaze map. Secondly, a novel spatial descriptor is used to rescale the local regions according to their importance in the salient feature map. Finally, the combined color descriptor is evaluated on a large database to show its strength.
BibTeX:
@mastersthesis{Cao2010b,
  author = {Guanqun Cao},
  title = {Salient color feature extraction for image retrieval},
  school = {Gj{\o}vik University College},
  year = {2010},
  keywords = {Content-based indexing and retrieval, visual saliency, color imaging, perceptual color descriptor},
  url = {http://www.hig.no/content/download/28583/327828/file/Clarance%20Cao.pdf}
}
BibTeX:
@conference{Cao2010a,
  author = {Guanqun Cao and Faouzi Alaya Cheikh},
  title = {Salient region detection with opponent color boosting},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Jul},
  year = {2010}
}
Abstract: When an image is reproduced on a device, different artifacts can occur. These artifacts, if detectable by observers, reduce the quality of the image. If these artifacts occur in salient regions (regions of interest), or if they introduce salient regions, they contribute to reducing the quality of the reproduction. In this paper we propose a novel method for the detection of artifacts based on saliency models. The method is evaluated against a set of gamut mapped images containing the most common artifacts, which have been marked by a group of color experts. The results show that the proposed metrics are promising for detecting artifacts introduced by the reproduction.
BibTeX:
@inproceedings{Cao2010,
  author = {Guanqun Cao and Marius Pedersen and Zofia Baranczuk},
  title = {Saliency Models as Gamut-Mapping Artifact Detectors},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {437--443}
}
BibTeX:
@mastersthesis{Caracciolo2009,
  author = {Valentina Caracciolo},
  title = {Just Noticeable Distortion evaluation in color images},
  school = {Gj{\o}vik University College and Roma Tre University},
  year = {2009},
  url = {http://colorlab.no/content/download/25901/274793/file/Caracciolo2009_Master_Thesis.pdf}
}
BibTeX:
@article{Cheikh2012,
  author = {Faouzi Alaya Cheikh and Sajib Kumar Saha and Victoria Rudakova and Peng Wang},
  title = {Multi-people tracking across multiple cameras},
  journal = {International Journal of New Computer Architectures and their Applications (IJNCAA)},
  year = {2012},
  volume = {2},
  number = {1},
  pages = {22--33}
}
BibTeX:
@mastersthesis{CHEN2013,
  author = {Ailin CHEN},
  title = {Colour visualisation of hyperspectral images in art restoration},
  school = {Gj{\o}vik University College},
  year = {2013}
}
BibTeX:
@inproceedings{Chen2013,
  author = {Ailin Chen and Eric Dinet and Jon Yngve Hardeberg},
  title = {The creation of an artwork with simultaneous contrast},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
BibTeX:
@article{Cheung2005,
  author = {Vien Cheung and Changjun Li and Stephen Westland and Jon Yngve Hardeberg and David Connah},
  title = {Characterization of trichromatic color cameras using a new multispectral imaging technique},
  journal = {Journal of Optical Society of America A},
  year = {2005},
  volume = {22},
  number = {7},
  pages = {1231-1240}
}
Abstract: In this paper we deal with a new Technical Specification providing a method for the objective measurement of print quality characteristics that contribute to perceived printer resolution: “ISO/IEC TS 29112:2012: Information Technology – Office equipment – Test charts and methods for measuring monochrome printer resolution”. The Technical Specification targets monochrome electrophotographic printing systems; since the measures it defines should be system- and technology-independent, inkjet printing systems are included in our study as well. In order to verify whether the given objective methods correlate well with human perception, a psychophysical experiment has been conducted, and the objective methods have been compared against the perceptual data.
BibTeX:
@inproceedings{Cisarova2013,
  author = {Milena Cisarova and Marius Pedersen and Peter Nussbaum and Frans Gaykema},
  title = {Verification of proposed ISO methods to measure resolution capabilities of printing systems},
  booktitle = {Image Quality and System Performance X},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2013},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {8653},
  pages = {86530M},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1568813}
}
BibTeX:
@inproceedings{Colantoni2009,
  author = {Philip Colantoni and Jean-Baptiste Thomas},
  title = {A color management process for real time color reconstruction of multispectral images},
  booktitle = {16th Scandinavian Conference on Image Analysis},
  address = {Oslo, Norway},
  month = {Jun},
  year = {2009},
  series = {Lecture Notes in Computer Science},
  volume = {5575},
  pages = {128--137},
  url = {http://www.springerlink.com/link.asp?id=105633}
}
Abstract: A new, accurate, and technology-independent display color-characterization model is introduced. It is based on polyharmonic spline interpolation and on an optimized adaptive training data set. The establishment of this model is fully automatic and requires only a few minutes, making it efficient in a practical situation. The experimental results are very good for both the forward and inverse models. Typically, the proposed model yields an average model prediction error of about 1 ΔE*ab unit or below for several displays. The maximum error is shown to be low as well.
BibTeX:
@article{Colantoni2011,
  author = {Philippe Colantoni and Jean-Baptiste Thomas and Jon Yngve Hardeberg},
  title = {High-end colorimetric display characterization using an adaptive training set},
  month = {Aug},
  journal = {Journal of the Society for Information Display},
  year = {2011},
  volume = {19},
  number = {8},
  pages = {520--530},
  doi = {10.1889/JSID19.8.520}
}
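The display characterization above rests on polyharmonic spline interpolation from device RGB to a colorimetric space. A minimal sketch of that idea follows; everything here is illustrative (the r^3 kernel choice, the toy gamma-plus-matrix "display", and all parameter values are assumptions, not the model or data of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_polyharmonic(X, Y, k=3):
    """Polyharmonic spline interpolant phi(r) = r**k with an affine term,
    fitted by solving the standard augmented linear system."""
    n = len(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    P = np.hstack([np.ones((n, 1)), X])                     # affine part, n x 4
    system = np.block([[r ** k, P], [P.T, np.zeros((4, 4))]])
    rhs = np.vstack([Y, np.zeros((4, Y.shape[1]))])
    return np.linalg.solve(system, rhs), X

def eval_polyharmonic(model, Xq, k=3):
    coef, X = model
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
    return np.hstack([r ** k, np.ones((len(Xq), 1)), Xq]) @ coef

# Toy forward display model (hypothetical): gamma curve + 3x3 primary matrix.
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
display = lambda rgb: (rgb ** 2.2) @ M.T

train_rgb = rng.uniform(0, 1, (125, 3))                     # training data set
model = fit_polyharmonic(train_rgb, display(train_rgb))
test_rgb = rng.uniform(0.05, 0.95, (60, 3))
max_err = np.abs(eval_polyharmonic(model, test_rgb) - display(test_rgb)).max()
```

The paper additionally optimizes *which* training patches to measure (the adaptive training set); the sketch simply uses random ones.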
BibTeX:
@misc{Colourlab2014,
  author = {Colourlab},
  title = {Colourlab Annual Report 2013},
  year = {2014}
}
Abstract: The surface reflectance functions of natural and man-made surfaces are invariably smooth. It is desirable to exploit this smoothness in a multispectral imaging system by using as few sensors as possible to capture and reconstruct the data. In this paper we investigate the minimum number of sensors to use, while also minimizing reconstruction error. We do this by deriving different numbers of optimized sensors, constructed by transforming the characteristic vectors of the data, and simulating reflectance recovery with these sensors in the presence of noise. We find an upper limit to the number of optimized sensors one should use, above which the noise prevents decreases in error. For a set of Munsell reflectances, captured under educated levels of noise, we find that this limit occurs at approximately nine sensors. We also demonstrate that this level is both noise and dataset dependent, by providing results for different magnitudes of noise and different reflectance datasets.
BibTeX:
@article{Connah2006,
  author = {David Connah and Ali Alsam and Jon Yngve Hardeberg},
  title = {Multispectral Imaging: How Many Sensors Do We Need?},
  month = {Jan/Feb},
  journal = {The Journal of Imaging Science and Technology},
  year = {2006},
  volume = {50},
  number = {1},
  pages = {45-52},
  note = {ISBN / ISSN: 1062-3701}
}
Abstract: The surface reflectance functions of natural and man-made surfaces are invariably smooth. It is desirable to exploit this smoothness in a multispectral imaging system by using as few sensors as possible to capture and reconstruct the data. In this paper we investigate the minimum number of sensors to use, whilst also minimising reconstruction error. We do this by deriving different numbers of optimised sensors, constructed by transforming the characteristic vectors of the data, and simulating reflectance recovery with these sensors in the presence of noise. We find an upper limit to the number of optimised sensors one should use, above which the noise prevents decreases in error. For a set of Munsell reflectances, captured under educated levels of noise, we find that this limit occurs at approximately 9 sensors.
BibTeX:
@inproceedings{Connah2004,
  author = {David Connah and Ali Alsam and Jon Yngve Hardeberg},
  title = {Multispectral Imaging: How Many Sensors Do We Need?},
  booktitle = {Twelfth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications},
  address = {Scottsdale, AZ, USA},
  month = {Nov},
  year = {2004},
  pages = {53-58},
  note = {ISBN / ISSN: 0-89208-254-2}
}
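The sensor-count experiment in the two Connah et al. entries above (characteristic vectors as sensors, linear recovery under noise) can be sketched as follows. The Gaussian-bump reflectance set, the noise level, and the plain least-squares estimator are illustrative assumptions standing in for the Munsell data and the estimators of the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)

# Hypothetical smooth reflectances: nonnegative mixtures of 8 Gaussian bumps
# (a stand-in for a measured set such as the Munsell reflectances).
centres = np.linspace(410, 690, 8)
bumps = np.exp(-0.5 * ((wl[None, :] - centres[:, None]) / 40.0) ** 2)   # 8 x 31
reflectances = np.clip(rng.uniform(0, 1, (500, 8)) @ bumps, 0, 1)

def optimized_sensors(R, k):
    """First k characteristic vectors (principal components) of the
    reflectance set, used directly as sensor sensitivities."""
    _, _, Vt = np.linalg.svd(R - R.mean(axis=0), full_matrices=False)
    return Vt[:k]                                            # k x 31

def recovery_rmse(R, S, noise_sd=0.005):
    """Capture with sensors S under additive noise, then recover the
    reflectances with a linear estimator learned from the data."""
    W = np.linalg.lstsq(R @ S.T, R, rcond=None)[0]           # noise-free training
    noisy = R @ S.T + rng.normal(0, noise_sd, (len(R), len(S)))
    return float(np.sqrt(np.mean((noisy @ W - R) ** 2)))
```

Sweeping k through `recovery_rmse(reflectances, optimized_sensors(reflectances, k))` reproduces the qualitative trade-off the abstract describes: error falls with more sensors until noise dominates.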
Abstract: In this paper we apply polynomial models to the problem of reflectance recovery for both three-channel and multispectral imaging systems. The results suggest that the technique is superior in terms of accuracy to a standard linear transform, and its generalisation performance is equivalent provided that some regularisation is employed. The experiments with the multispectral system suggest that this advantage is reduced when the number of sensors is increased.
BibTeX:
@inproceedings{Connah2005,
  author = {David R. Connah and Jon Yngve Hardeberg},
  title = {Spectral recovery using polynomial models},
  booktitle = {Color Imaging X: Processing, Hardcopy, and Applications},
  address = {San Jose, California},
  month = {Jan},
  year = {2005},
  pages = {65-75},
  note = {ISBN / ISSN: 0-8194-5640-3}
}
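The polynomial recovery in Connah and Hardeberg's abstract above amounts to expanding the camera responses into polynomial terms and fitting a regularised linear map to reflectance. A minimal sketch under assumed data (Gaussian sensor curves, synthetic smooth reflectances, a small ridge penalty; none of these come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(400, 700, 31)

# Hypothetical data: smooth reflectances and three Gaussian camera channels.
centres = np.array([440.0, 550.0, 650.0])
basis = np.exp(-0.5 * ((wl[None, :] - centres[:, None]) / 60.0) ** 2)   # 3 x 31
R = np.clip(rng.uniform(0, 1, (400, 3)) @ basis, 0, 1)                  # reflectances
S = np.exp(-0.5 * ((wl[None, :] - np.array([[460.0], [540.0], [610.0]])) / 35.0) ** 2)

def features(C, order=2):
    """Polynomial expansion of RGB responses: constant, linear, squares, products."""
    r, g, b = C.T
    cols = [np.ones(len(C)), r, g, b]
    if order >= 2:
        cols += [r * r, g * g, b * b, r * g, r * b, g * b]
    return np.stack(cols, axis=1)

def fit(R, S, order=2, lam=1e-4):
    """Ridge-regularised least-squares map from expanded responses to reflectances."""
    X = features(R @ S.T, order)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ R)

M2 = fit(R, S, order=2)
rmse2 = np.sqrt(np.mean((features(R @ S.T, 2) @ M2 - R) ** 2))
```

Setting `order=1` recovers the standard linear transform the paper compares against; the `lam` term is the regularisation the abstract says is needed for generalisation.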
Abstract: The appearance of translucent materials is strongly affected by bulk (or subsurface) scattering. For paper and carton board, lateral light propagation and angle-resolved reflection have been studied extensively but treated separately. The present work aims at modelling the BSSRDF of turbid media in order to study the angular variation of the reflectance as a function of the lateral propagation within the medium. Monte Carlo simulations of the spatially and angle-resolved reflectance of turbid media are performed for different scattering and absorption coefficients, phase functions and surface topographies representative of several paper grades. The average (or standard) BRDF shows a specular reflectance peak, but it also increases with increasing polar angle, from 0° (normal to the paper surface) to 90° (parallel to the surface). The BSSRDF simulations show that the bulk reflection is anisotropic and that the anisotropy decreases with propagation distance. Hence, the angle-resolved reflectance of turbid media is a function of the lateral light propagation within the substrate. This may impact the appearance at different angles and make measurements of the lateral light propagation dependent on the instrument geometry. Since the model used can handle topographical surfaces and ink layers, future work includes modelling the BSSRDF of 2.5D prints.
BibTeX:
@inproceedings{Coppel2014a,
  author = {Coppel, Ludovic Gustafsson},
  title = {Lateral light propagation and angular variation of the reflectance of paper},
  booktitle = {Measuring, Modeling, and Reproducing Material Appearance},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2014},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9018},
  pages = {9018-20},
  url = {http://spie.org/EI/conferencedetails/measuring-modeling-reproducing-material-appearance}
}
BibTeX:
@inproceedings{Coppel2014c,
  author = {Coppel, Ludovic Gustafsson},
  title = {Dot gain analysis from probabilistic spectral modelling of colour halftone},
  booktitle = {41st International Research Conference of iarigai},
  address = {Swansea, UK},
  year = {2014},
  pages = {13--17}
}
BibTeX:
@inproceedings{Coppel2014d,
  author = {Coppel, Ludovic Gustafsson},
  title = {Next generation printing - Towards spectral proofing},
  booktitle = {41st International Research Conference of iarigai},
  address = {Swansea, UK},
  year = {2014},
  pages = {19--23}
}
Abstract: The spectral radiance factor, and thereby the appearance, of fluorescing material is known to depend strongly on the spectral power distribution (SPD) of the illumination in the fluorophore’s excitation wavelength band. The present work demonstrates the impact of the SPD in the fluorescence emission band on the total radiance factor. The total radiance factor of a fluorescing paper is measured under three different illuminations. The presence of peaks in the SPD of fluorescent light tubes dramatically decreases the luminescent radiance factor. This effect will impact the appearance of fluorescing media under illuminations with large variation in SPD, which includes recent LED illuminations.
BibTeX:
@conference{Coppel2013,
  author = {Ludovic Gustafsson Coppel and Mattias Andersson and Ole Norberg and Siv Lindberg},
  title = {Impact of illumination spectral power distribution on radiance factor of fluorescing materials},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sept},
  publisher = {IEEE},
  year = {2013}
}
Abstract: Colour Printing 7.0: Next Generation Multi-Channel Printing (CP7.0) is a training and research project funded under the Marie Skłodowska-Curie Initial Training Networks (MCITN) call in the EU's seventh framework programme (FP7). The project is led by Gjøvik University College in collaboration with five full network partners and six associated partners from academia and industry. The project addresses a significant need for research, training and innovation in the printing industry. The main objectives of this project are to train a new generation of printing scientists who will be able to assume science and technology leadership in this established technological sector, and to do research in the colour printing field by fully exploring the possibilities of using more than the conventional CMYK inks. The research focuses particularly on spectral reproduction (new spectral colour modelling, spectral gamut mapping, halftoning and image quality assessment) and on multilayer printing methods to control ink mixing, relief (2.5D prints) and surface properties. This paper reviews the achievements of the project so far, in conjunction with a topical workshop at the 22nd Color and Imaging Conference on “Next generation colour printing”.
BibTeX:
@inproceedings{Coppel2014b,
  author = {Coppel, Ludovic Gustafsson and Sole, Aditya and Hardeberg, Jon Yngve},
  title = {Colour Printing 7.0: Next Generation Multi-Channel Printing},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {37--42},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00043}
}
BibTeX:
@mastersthesis{DEBORAH2013,
  author = {Hilda Deborah},
  title = {Color Prediction and Pigment Mapping of Cultural Heritage Paintings Based on Hyperspectral Imaging},
  school = {Gj{\o}vik University College},
  year = {2013}
}
BibTeX:
@incollection{Deborah2014,
  author = {Deborah, Hilda and George, Sony and Hardeberg, Jon Yngve},
  title = {Pigment Mapping of the Scream (1893) Based on Hyperspectral Imaging},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8509},
  pages = {247-256},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_28},
  doi = {10.1007/978-3-319-07998-1_28}
}
BibTeX:
@inproceedings{Deborah2014a,
  author = {Deborah, Hilda and Richard, Noel and Hardeberg, Jon Yngve},
  title = {On the Quality Evaluation of Spectral Image Processing Algorithms},
  booktitle = {Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on},
  month = {Nov},
  year = {2014},
  pages = {133-140},
  keywords = {image filtering;spectral analysis;distance-based spectral image processing tools;full reference quality assessment;low-level spectral image processing tools;reduced reference quality assessment;spectral distance;spectral image filtering;spectral image processing algorithms;spectral noise;Image color analysis;Noise;Noise measurement;Protocols;Quality assessment;Spectral image processing;quality assessment;spectral distance;spectral filtering},
  doi = {10.1109/SITIS.2014.50}
}
Abstract: The performance of an image processing algorithm can be assessed through its resulting images. However, in order to do so, both a ground truth image and a noisy target image with known properties are typically required. In the context of hyperspectral image processing, another constraint is introduced: apart from its mathematical properties, an artificial signal, noise, or variation should be physically correct. Deciding to work at an intermediate level, between real spectral images and mathematical models of noise, we develop an approach for obtaining suitable spectral impulse signals. The model is followed by the construction of target images corrupted by impulse signals; these images are later used to evaluate the performance of a filtering algorithm.
BibTeX:
@incollection{Deborah2015,
  author = {Deborah, Hilda and Richard, Noel and Hardeberg, Jon Yngve},
  title = {Spectral Impulse Noise Model for Spectral Image Processing},
  booktitle = {Computational Color Imaging},
  publisher = {Springer International Publishing},
  year = {2015},
  series = {Lecture Notes in Computer Science},
  volume = {9016},
  pages = {171-180},
  keywords = {Hyperspectral image; Image processing; Impulse noise},
  url = {http://dx.doi.org/10.1007/978-3-319-15979-9_17},
  doi = {10.1007/978-3-319-15979-9_17}
}
BibTeX:
@inproceedings{Deborah2015a,
  author = {Hilda Deborah and Noel Richard and Jon Yngve Hardeberg},
  title = {Spectral ordering assessment using spectral median filters},
  booktitle = {International Symposium on Mathematical Morphology},
  address = {Reykjavik, Iceland},
  year = {2015}
}
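The Deborah et al. entries above study spectral median filtering and its evaluation under impulse noise. A classical vector median filter, a plausible baseline for this line of work (the window size, toy cube, and Euclidean spectral distance are illustrative choices, not the ordering studied in the paper):

```python
import numpy as np

def vector_median_filter(cube, radius=1):
    """Vector median filter: each pixel's spectrum is replaced by the
    neighbourhood spectrum minimising the summed Euclidean distance to all
    other spectra in the window, so no new spectra are invented."""
    H, W, B = cube.shape
    out = cube.copy()
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            win = cube[i0:i1, j0:j1].reshape(-1, B)
            # summed distance of each candidate spectrum to all others
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(axis=1)
            out[i, j] = win[np.argmin(d)]
    return out
```

On a cube corrupted by spectral impulses, the pixel holding an impulse has a large summed distance and is replaced by a genuine neighbouring spectrum, which is the behaviour a spectral impulse-noise evaluation probes.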
BibTeX:
@conference{Deger2013,
  author = {Ferdinand Deger and Alamin Mansouri and Phillipe Curdy and Yvon Voisin and Jon Yngve Hardeberg},
  title = {Acquisition and documentation of prehistoric funeral stone stelae},
  booktitle = {Denkmäler3.de},
  month = {Oct.},
  year = {2013},
  pages = {2 pp.}
}
BibTeX:
@incollection{Deger2014,
  author = {Deger, Ferdinand and Mansouri, Alamin and Pedersen, Marius and Hardeberg, Jon Yngve and Voisin, Yvon},
  title = {A Variational Approach for Denoising Hyperspectral Images Corrupted by Poisson Distributed Noise},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8509},
  pages = {106-114},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_13},
  doi = {10.1007/978-3-319-07998-1_13}
}
Abstract: Many denoising approaches extend image processing to a hyperspectral cube structure, but take into account neither a sensor model nor the format of the recording. We propose a denoising framework for hyperspectral images that uses sensor data to convert an acquisition to a representation facilitating noise estimation, namely the photon-corrected image. This photon-corrected image format accounts for the most common noise contributions and is spatially proportional to spectral radiance values. The subsequent denoising is based on an extended variational denoising model suited to Poisson distributed noise. A spatially and spectrally adaptive total variation regularisation term accounts for the structure of the hyperspectral image cube. We evaluate the approach on a synthetic dataset that guarantees a noise-free ground truth; the best results are achieved when the dark current is taken into account.
BibTeX:
@article{Ferdin2015,
  author = {Ferdinand Deger and Alamin Mansouri and Marius Pedersen and Jon Y. Hardeberg and Yvon Voisin},
  title = {A sensor-data-based denoising framework for hyperspectral images},
  journal = {Optics Express},
  publisher = {Optical Society of America (OSA)},
  year = {2015},
  volume = {23},
  number = {3},
  pages = {1938-1950},
  url = {http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-23-3-1938},
  doi = {10.1364/OE.23.001938}
}
BibTeX:
@inproceedings{Deger2012,
  author = {Ferdinand Deger and Alamin Mansouri and Marius Pedersen and Jon Y. Hardeberg and Yvon Voisin},
  title = {Multi- and single-output Support Vector Regression for Spectral Reflectance Recovery},
  booktitle = {Eighth International Conference on Signal Image Technology and Internet Based Systems},
  address = {Sorrento, Naples, Italy},
  month = {Nov},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {805--810}
}
Abstract: Implementations of Example-Based Super-Resolution (EBSR) have been developed extensively. Any such EBSR method is typically evaluated against a constructed test set to define its performance and applicability. Nevertheless, it is rare for a formed test set to precisely resemble data met in a real-world problem. Usually, low-quality training and test subsets are obtained directly from their corresponding high-quality ground truth data. This allows for a complete and reliable quantitative examination of performance at a later stage. In a real-world problem, however, test data are obtained from another source, for example, printed images. Naturally, low-quality scanned halftones and high-quality continuous tone images would possibly be spatially incoherent training pairs. Such circumstances give rise to one major consideration: misalignment in training subsets. The present work demonstrates the significance of the effect of misalignment among training subsets in applying EBSR and supports the necessity of image registration as a preprocessing step to overcome this problem.
BibTeX:
@conference{Demetriou2013,
  author = {Maria Lena Demetriou and Jon Hardeberg and Gabriel Adelmann},
  title = {Learning super-resolution from misaligned examples},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sept},
  publisher = {IEEE},
  year = {2013}
}
BibTeX:
@inproceedings{Demetriou2012,
  author = {Maria Lena Demetriou and Jon Yngve Hardeberg and Gabriel Adelmann},
  title = {Computer-Aided Reclamation of Lost Art},
  booktitle = {12th European Conference on Computer Vision (ECCV)},
  month = {October},
  year = {2012}
}
BibTeX:
@mastersthesis{Demetriou2012a,
  author = {Maria-Lena Demetriou},
  title = {Computer-Aided Reclamation of Lost Art},
  school = {Gj{\o}vik University College},
  year = {2012}
}
BibTeX:
@inproceedings{Derhak2015,
  author = {Maxim W. Derhak and Phil Green and Tom Lianza},
  title = {Introducing iccMAX: new frontiers in color management},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-20},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109903}
}
BibTeX:
@article{Deshpande,
  author = {Deshpande, Kiran and Green, Phil and Pointer, Michael},
  title = {Metrics for comparing and analyzing two colour gamuts},
  journal = {Color Research and Application}
}
BibTeX:
@article{Deshpande2014b,
  author = {Deshpande, Kiran and Green, Phil and Pointer, Michael},
  title = {Characterisation of the n-colour printing process using the spot colour overprint model},
  journal = {Optics Express},
  publisher = {Optical Society of America (OSA)},
  year = {2014},
  volume = {22},
  number = {26},
  pages = {31786--31800},
  url = {http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-22-26-31786},
  doi = {10.1364/OE.22.031786}
}
BibTeX:
@article{Deshpande2014,
  author = {Deshpande, Kiran and Green, Phil and Pointer, Michael R},
  title = {Gamut evaluation of an n-colour printing process with the minimum number of measurements},
  journal = {Color Research \& Application},
  year = {2014},
  pages = {n/a--n/a},
  keywords = {colour gamuts, n-colour printing, spectral printer models, colour reproduction, printing},
  url = {http://dx.doi.org/10.1002/col.21909},
  doi = {10.1002/col.21909}
}
Abstract: Although the n-colour printing process increases the colour gamut, it presents a challenge in generating colour separations. This paper evaluates different methods of implementing the inverse printer model to obtain the colour separation for n-colour printing processes. The constrained optimisation and the look-up table based inversion methods were evaluated. The colorant space was divided into sectors of 4 inks and the inverse printer models were applied to each sector.

The results were found to be adequate, with mean CIEDE2000 values between the original colours and the model-predicted colours below 1.5 for most of the models. The look-up table based inversion was computationally faster than the constrained optimisation approach. The 9-level look-up table model gave accurate predictions at little cost in processing time. It can be used to replace spot coloured inks with the 7-colour printing process in packaging printing to achieve significant cost savings.
BibTeX:
@inproceedings{Deshpande2014a,
  author = {Kiran Deshpande and Phil Green and Michael R. Pointer},
  title = {Colour Separation of N-colour Printing Process Using Inverse Printer Models},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {194--199},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00034}
}
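The look-up-table inversion evaluated in the Deshpande et al. abstract above can be sketched in its simplest form: tabulate a forward printer model on a colorant lattice and pick the nearest node. The 4-colorant "press" below is a hypothetical linear-mixing model with a dot-gain-like exponent, not a characterised device, and the nearest-node search omits the paper's sector-wise interpolation:

```python
import numpy as np
from itertools import product

# Toy forward printer model (hypothetical): 4 colorant coverages in [0,1]
# mapped to a 3-D colour vector, with a power term mimicking dot gain.
A = np.array([[-50.0, -10.0,  -5.0, -60.0],
              [ -5.0, -60.0,  10.0, -30.0],
              [ 10.0,   5.0, -55.0, -25.0]])
paper = np.array([95.0, 5.0, 8.0])

def forward(c):
    return paper + A @ (np.asarray(c, float) ** 1.8)

def build_lut(levels=9):
    """Tabulate the forward model on a regular colorant lattice."""
    grid = np.linspace(0.0, 1.0, levels)
    nodes = np.array(list(product(grid, repeat=4)))          # 9^4 = 6561 nodes
    return nodes, np.array([forward(c) for c in nodes])

def invert(target, nodes, colours):
    """Nearest-node LUT inversion: the colorant combination whose predicted
    colour lies closest to the target."""
    return nodes[np.argmin(np.linalg.norm(colours - target, axis=1))]

nodes, colours = build_lut(9)
c_true = np.array([0.25, 0.5, 0.125, 0.375])   # happens to lie on the 9-level grid
c_est = invert(forward(c_true), nodes, colours)
```

The 9-level lattice mirrors the "9-level lookup table" the abstract singles out; a production implementation would interpolate within the winning cell rather than snap to a node.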
BibTeX:
@mastersthesis{Dugay2007,
  author = {Fabienne Dugay},
  title = {Perceptual evaluation of colour gamut mapping algorithms},
  school = {Gj{\o}vik University College and Grenoble Institute of Technology},
  year = {2007},
  url = {http://www.colorlab.no/content/download/21934/215647/file/Fabienne_Dugay_Master_thesis.pdf}
}
BibTeX:
@article{Dugay2007b,
  author = {Fabienne Dugay and Ivar Farup and Jon Yngve Hardeberg},
  title = {Perceptual Evaluation of Color Gamut Mapping Algorithms},
  month = {Dec},
  journal = {Color Research \& Application},
  year = {2008},
  volume = {33},
  number = {6},
  pages = {470-476}
}
Abstract: Understanding how observers perceive image quality is complex, largely because of the subjectivity but also because of the dimensionality of the task. A considerable amount of work has been done on finding a way to assess the perceived quality of a printed document without observers. These methods usually focus on one quality attribute at a time. However, when an observer views a document, he most likely judges the quality with consideration to all attributes simultaneously, or a subset of relevant attributes depending on the document type and the differences between the reproductions. With a focus on different printing options, this study has investigated the difference in perceived quality between printing options and how the most relevant quality attributes may relate to one another. A set of observers have been asked to compare the quality of reproductions that have been printed with different workflows. The principal components have been determined and used to understand the relationships that may exist between the different quality attributes. A set of image characteristics have also been selected to help summarize the amount of each attribute a given document may have.
BibTeX:
@inproceedings{Falkenstern2011a,
  author = {Kristyn Falkenstern and Nicolas Bonnier and Hans Brettel and Marius Pedersen and Francoise Vienot},
  title = {Weighing Quality Attributes},
  booktitle = {21st Symposium of the International Colour Vision Society ({ICVS})},
  address = {Kongsberg, Norway},
  month = {Jul},
  year = {2011},
  pages = {88},
  note = {ISBN 978-82-8261-009-4}
}
BibTeX:
@inproceedings{Falkenstern2010,
  author = {Kristyn Falkenstern and Nicolas Bonnier and Hans Brettel and Marius Pedersen and Francoise Vienot},
  title = {Using Image Quality Metrics to Evaluate an ICC Printer Profile},
  booktitle = {Color and Imaging Conference},
  address = {San Antonio, TX},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2010},
  pages = {244--249},
  url = {http://colorlab.no/content/download/30330/362122/file/Falkenstern2010_poster.pdf}
}
Abstract: Increased interest in color management has resulted in more options for the user to choose between for their color management needs. We propose an evaluation process that uses metrics to assess the quality of ICC profiles, specifically for the perceptual rendering intent. The primary objective of the perceptual rendering intent, unlike the media-relative intent, is a preferred reproduction rather than an exact match. Profile vendors commonly quote a CIE ΔE*ab color difference to define the quality of a profile. With the perceptual rendering intent, this may or may not correlate with the preferred reproduction. For this work we compiled a comprehensive list of quality aspects, used to evaluate the perceptual rendering intent of an ICC printer profile. The aspects are used as tools to individually judge the different qualities that define the overall strength of profiles. The proposed workflow uses metrics to assess each aspect and delivers a relative comparison between different printer profile options. The aim of the research is to improve the current methods used to evaluate a printer profile, while reducing the amount of time required.
BibTeX:
@inproceedings{Falkenstern2011,
  author = {Kristyn Falkenstern and Nicolas Bonnier and Marius Pedersen and Hans Brettel and Francoise Vienot},
  title = {Using Metrics to Assess the ICC Perceptual Rendering Intent},
  booktitle = {Image Quality and System Performance},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging}
}
Abstract: It is well established from both colour difference and colour order perspectives that the colour space cannot be Euclidean. In spite of this, most colour spaces still in use today are Euclidean, and the best Euclidean colour metrics are performing comparably to state-of-the-art non-Euclidean metrics. In this paper, it is shown that a transformation from Euclidean to hyperbolic geometry (i.e., constant negative curvature) for the chromatic plane can significantly improve the performance of Euclidean colour metrics to the point where they are statistically significantly better than state-of-the-art non-Euclidean metrics on standard data sets. The resulting hyperbolic geometry nicely models both qualitatively and quantitatively the hue super-importance phenomenon observed in colour order systems.
BibTeX:
@article{Farup2014,
  author = {Ivar Farup},
  title = {Hyperbolic geometry for colour metrics},
  month = {May},
  journal = {Optics Express},
  publisher = {OSA},
  year = {2014},
  volume = {22},
  number = {10},
  pages = {12369--12378},
  keywords = {Vision, color, and visual optics ; Color; Color, measurement ; Color vision; Colorimetry},
  url = {http://www.opticsexpress.org/abstract.cfm?URI=oe-22-10-12369},
  doi = {10.1364/OE.22.012369}
}
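The core idea of Farup's abstract above, giving the chromatic plane constant negative curvature so that hue differences cost more at high chroma, can be illustrated with a Poincaré-disk distance. The disk scaling `R`, the lightness weight `k`, and the use of raw (a*, b*) coordinates are illustrative assumptions, not the fitted parameterisation of the paper:

```python
import numpy as np

def hyperbolic_chroma_distance(ab1, ab2, R=200.0):
    """Poincare-disk distance between two (a*, b*) chromaticity points after
    scaling into the unit disk by R (illustrative curvature/scale choice)."""
    u = np.asarray(ab1, float) / R
    v = np.asarray(ab2, float) / R
    num = 2.0 * np.sum((u - v) ** 2)
    den = (1.0 - np.sum(u * u)) * (1.0 - np.sum(v * v))
    return float(np.arccosh(1.0 + num / den))

def colour_difference(lab1, lab2, k=25.0):
    """Euclidean lightness difference combined with the hyperbolic chroma term."""
    dL = lab1[0] - lab2[0]
    return float(np.hypot(dL, k * hyperbolic_chroma_distance(lab1[1:], lab2[1:])))

d = colour_difference([50.0, 20.0, 10.0], [60.0, -15.0, 5.0])
```

Because the Poincaré metric blows up toward the disk boundary, a fixed hue-angle step costs more hyperbolic distance at high chroma than at low chroma, which is exactly the hue super-importance behaviour the abstract refers to.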
BibTeX:
@article{Farup2007,
  author = {Ivar Farup and Carlo Gatta and Alessandro Rizzi},
  title = {A Multiscale Framework for Spatial Gamut Mapping},
  journal = {IEEE Transactions on Image Processing},
  year = {2007},
  volume = {16},
  number = {10},
  pages = {2423-2435}
}
BibTeX:
@techreport{Farup2004b,
  author = {Ivar Farup and Jon Yngve Hardeberg},
  title = {Colour calibration of an electronic camera system for object recognition},
  year = {2004},
  number = {2}
}
Abstract: Gamut mapping is an important issue in cross-media publishing. Although much research and development has been performed, consensus on a single gamut mapping algorithm working for a broad range of images and devices has not yet been reached. The recent tendency in the literature suggests that image-dependent gamut mapping works best. To avoid the computational overhead associated with image-dependent gamut mapping algorithms, one solution is to use different rendering intents (absolute or relative colorimetric, perceptual, or saturation) for traditional device gamut based algorithms. The optimal solution, however, still turns out to be image dependent, leaving craftsmanship as the only real alternative. Unfortunately, no intuitive tools – neither software nor hardware – exist for this work, so one is left with trial-and-error based methods with no direct intuitive coupling between the parameters adjusted and the color corrections obtained. Hence, a software tool for interactive color gamut mapping in a device independent color space such as CIELAB is needed. Such a tool has been developed by the authors. The application allows for interactive manipulation of colors in the 3D color spaces CIELAB, CIEXYZ, and sRGB. Image and device gamuts can be visualized in various ways in the same figure. The view can be changed interactively, and points representing individual pixel colors, groups of pixels, or the image gamut boundary can be moved in color space using a pointing device. Already at the present stage, the application has become a useful tool for understanding mechanisms associated with color image reproduction, as well as for actually performing interactive image-dependent color gamut mapping.
BibTeX:
@conference{Farup2002a,
  author = {Ivar Farup and Jon Yngve Hardeberg},
  title = {Interactive color gamut mapping},
  booktitle = {The 11th International Printing and Graphics Arts Conference},
  address = {Bordeaux, France},
  month = {Oct},
  year = {2002}
}
Abstract: The SGCK gamut mapping algorithm suggested by CIE TC8-03 has been enhanced by introducing a two-step procedure. Firstly, SGCK is used for gamut mapping the image onto a convex hull representation of the reproduction gamut. The resulting image is then further mapped onto a more realistic representation of the reproduction gamut using hue-angle preserving minimum ΔE*ab clipping. Panel testing with fifteen test persons, six different test images, and two different printers shows that this technique gives significantly better results than SGCK.
BibTeX:
@inproceedings{Farup2004a,
  author = {Ivar Farup and Jon Yngve Hardeberg and Morten Amsrud},
  title = {Enhancing the {SGCK} Colour Gamut Mapping Algorithm},
  booktitle = {CGIV 2004 -- Second European Conference on Color in Graphics, Imaging and Vision},
  address = {Aachen, Germany},
  month = {Apr},
  year = {2004},
  pages = {520-524},
  note = {ISBN / ISSN: 0-89208-250-X}
}
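The second step of the two-step procedure in the Farup et al. abstract above, hue-angle preserving minimum ΔE*ab clipping, can be sketched with a brute-force search on the constant-hue leaf. The cylindrical toy gamut and the grid resolution are illustrative assumptions; a real implementation would query an actual gamut boundary description:

```python
import numpy as np

def in_gamut(lab):
    """Toy destination gamut: a cylinder in CIELAB (a stand-in for a real
    printer gamut boundary description)."""
    L, a, b = lab
    return 0.0 <= L <= 100.0 and np.hypot(a, b) <= 60.0

def hue_preserving_min_de_clip(lab, n=201):
    """Move an out-of-gamut colour to the closest in-gamut colour (minimum
    Delta-E*ab) within its plane of constant hue angle, searched here on a
    coarse (L, C) grid."""
    lab = np.asarray(lab, float)
    if in_gamut(lab):
        return lab
    h = np.arctan2(lab[2], lab[1])            # hue angle is held fixed
    best, best_d = lab, np.inf
    for L in np.linspace(0.0, 100.0, n):
        for C in np.linspace(0.0, 150.0, n):
            cand = np.array([L, C * np.cos(h), C * np.sin(h)])
            if in_gamut(cand):
                d = np.linalg.norm(cand - lab)
                if d < best_d:
                    best, best_d = cand, d
    return best

mapped = hue_preserving_min_de_clip([50.0, 100.0, 0.0])
```

The grid search keeps the example transparent; closed-form projection onto the gamut boundary within the hue leaf would be used in practice.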
Abstract: Several tools and techniques for the visualization of color gamuts have been presented in the past. We present a short survey on the topic, and conclude that tools with the possibility for interactive color adjustment in some color space are almost absent. Therefore, a new tool which combines the known techniques with the possibility of interactive gamut mapping is presented, along with suggestions for future work. The motivation for developing the new tool is threefold: Firstly, it will serve as an important pedagogical tool in the teaching of color engineering. Secondly, we believe that the tool will prove helpful in research related to color reproduction. Finally, we hope that the tool can be used in the production of high quality color images in the future.
BibTeX:
@inproceedings{Farup2002,
  author = {Ivar Farup and Jon Yngve Hardeberg and Arne Magnus Bakke and St\r{a}le Kopperud and Anders Rindal},
  title = {Visualization and Interactive Manipulation of Color Gamuts},
  booktitle = {Tenth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications},
  address = {Scottsdale, Arizona, USA},
  month = {Nov},
  year = {2002},
  pages = {250-255},
  note = {ISBN / ISSN: 0-89208-241-0}
}
Abstract: The spectral integrator at the University of Oslo consists of a lamp whose light is dispersed into a spectrum by means of a prism. Using a transmissive LCD panel controlled by a computer, certain fractions of the light in different parts of the spectrum are masked out. The remaining spectrum is integrated and the resulting colored light projected onto a dispersing plate. Attached to the computer is also a spectroradiometer measuring the projected light, thus making the spectral integrator a closed-loop system. One main challenge is the generation of stimuli of arbitrary spectral power distributions. We have solved this by means of a computational calibration routine: Vertical lines of pixels within the spectral window of the LCD panel are opened successively and the resulting spectral power distribution on the dispersing plate is measured. A similar procedure for the horizontal lines gives, under certain assumptions, the contribution from each opened pixel. In this way, light of any spectral power distribution can be generated by means of a fast iterative heuristic search algorithm. The apparatus is convenient for research within the fields of color vision, color appearance modelling, multispectral color imaging, and spectral characterization of devices ranging from digital cameras to solar cell panels.
BibTeX:
@inproceedings{Farup2004,
  author = {Ivar Farup and Thorstein Seim and Jan Henrik Wold and Jon Yngve Hardeberg},
  title = {Generating stimuli of arbitrary spectral power distributions for vision and imaging research},
  booktitle = {Human Vision and Electronic Imaging IX},
  address = {San Jose, California},
  year = {2004},
  pages = {69-79},
  note = {ISBN / ISSN: 0-8194-5195-9}
}
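The calibration and closed-loop fitting described in the abstract above can be sketched numerically: once the contribution spectrum of each LCD column has been measured, finding the opening fractions for a target spectral power distribution reduces to a box-constrained least-squares search. The function name and the projected-gradient heuristic below are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def match_spd(A, target, iters=500, lr=0.05):
    """Find LCD column opening fractions x in [0, 1] such that x @ A
    approximates the target spectral power distribution.
    A[i] is the measured contribution spectrum of column i when fully open.
    Projected-gradient heuristic (an assumed stand-in for the paper's
    fast iterative search)."""
    x = np.full(A.shape[0], 0.5)
    for _ in range(iters):
        residual = x @ A - target             # mismatch on the diffusing plate
        grad = A @ residual                   # gradient of 0.5*||x @ A - target||^2
        x = np.clip(x - lr * grad, 0.0, 1.0)  # keep fractions physically realizable
    return x
```

In the actual closed-loop apparatus, `x @ A` would be replaced by a spectroradiometer measurement of the generated light at each iteration.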
Abstract: A particular version of a spectral integrator has been designed. It consists of a xenon lamp whose light is dispersed into a color spectrum by dispersing prisms. Using a transmissive LCD panel controlled by a computer, certain fractions of the light in different parts of the spectrum are masked out. The remaining transmitted light is integrated and projected onto a translucent diffusing plate. A spectroradiometer that measures the generated light is also attached to the computer, thus making the spectral integrator a closed-loop system. An algorithm for generating the light of a specified spectral power distribution has been developed. The resulting measured spectra differ from the specified ones with relative rms errors in the range of 1%–20%, depending on the shape of the spectral power distribution.
BibTeX:
@article{Farup2007a,
  author = {Ivar Farup and Jan Henrik Wold and Thorstein Seim and Torkjel S{\o}ndrol},
  title = {Generating lights with a specified spectral power distribution},
  journal = {Applied Optics},
  year = {2007},
  volume = {46},
  number = {13},
  pages = {2414-2422}
}
BibTeX:
@mastersthesis{Feier2012,
  author = {Alexandra Ioana Oncu Feier},
  title = {Digital Inpainting for Artwork Restoration: Algorithms and Evaluation},
  school = {Gj{\o}vik University College},
  year = {2012}
}
BibTeX:
@inproceedings{Finlayson2005,
  author = {Graham Finlayson and Ali Alsam},
  title = {Optimal Reduction of Calibration Charts by Integer Programming},
  booktitle = {Proceedings of the 10th Congress of the International Colour Association},
  address = {Granada, Spain},
  year = {2005},
  pages = {1215-1218},
  note = {ISBN 84-609-5164-2}
}
BibTeX:
@article{Finlayson2006,
  author = {Graham Finlayson and S. D. Hordley and Ali Alsam},
  title = {Investigating von Kries-like adaptation using local linear models},
  journal = {Color Research \& Application},
  year = {2006},
  volume = {31},
  number = {2},
  pages = {90-101}
}
BibTeX:
@mastersthesis{Gaddam2012,
  author = {Vamsidhar Reddy Gaddam},
  title = {Real time estimation of dense depth maps from difficult images},
  school = {Gj{\o}vik University College},
  year = {2012}
}
Abstract: The use of standard color reference targets at image acquisition allows compensation for different camera characteristics, illumination conditions and exposure times, ensuring true colors in digital photo workflows. Reliable automatic detection of reference targets makes color correction faster, and this becomes critical in mass digitization processes. The existing automatic algorithms usually assume that there is little perspective distortion and/or that the scanning resolution is known, achieving very limited results, for example, when the relative size of the color target is unknown. In this paper we present a preprocessing step that aims at automatically detecting a region of interest (ROI) where the reference target is located. We compare the performance of one of the available automatic tools (CCFind) with and without this preprocessing step, and show a considerable improvement in the detection of color reference targets in a new challenging dataset. In addition, a simple template matching approach is compared with the performance of CCFind. The results show that the selection of a smaller ROI complements the existing approaches well and helps to improve detection.
BibTeX:
@inproceedings{Capel2014,
  author = {Garcia Capel, Luis E. and Hardeberg, Jon Y.},
  title = {Automatic Color Reference Target Detection},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {119--124},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00020}
}
Abstract: In this study we determined the chromatic difference introduced by the optics of two different microscopes: Olympus SZX10® and Nikon ECLIPSE MA200®, by carefully measuring the 24 different colours of the GretagMacbeth ColorChecker® with a spectroradiometer through the observation eyepiece of the microscopes and computing the chromatic and colour differences with the measured values of the patches without the microscope. The results obtained for the Olympus SZX10® microscope show a mean chromatic difference of 6.52, 4.45, 5.56, 3.52, 3.85, 4.22, and 4.48 units; and a mean colour difference of 7.56, 5.60, 9.55, 4.99, 4.98, 5.48, and 5.69 units for CIELAB, CMC, BFD, CIE94, CIEDE2000, DIN99d and DIN99b, respectively. On the other hand the results obtained for the Nikon ECLIPSE MA200® microscope show a mean chromatic difference of 10.34, 6.48, 8.30, 5.28, 6.15, 3.72, and 6.86 units; and a mean colour difference of 13.31, 9.45, 17.19, 9.22, 9.04, 7.87, and 10.24 units for CIELAB, CMC, BFD, CIE94, CIEDE2000, DIN99d and DIN99b, respectively.
BibTeX:
@inproceedings{Garcia2013,
  author = {Juan Martinez Garcia and Roshanak Zakizadeh and Kiran B. Raja and Christos Siakides},
  title = {Chromatic differences introduced by microscope optics},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
BibTeX:
@inproceedings{George2015,
  author = {Sony George and Irina Mihaela Ciortan and Jon Yngve Hardeberg},
  title = {Evaluation of hyperspectral imaging systems for cultural heritage applications based on a round robin test},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015}
}
BibTeX:
@incollection{George2014,
  author = {George, Sony and Grecicosei, Ana Maria and Waaler, Erik and Hardeberg, Jon Yngve},
  title = {Spectral Image Analysis and Visualisation of the Khirbet Qeiyafa Ostracon},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8509},
  pages = {272-279},
  keywords = {multispectral imaging; cultural heritage imaging; principal component analysis; independent component analysis; proto-Canaanite; paleo-Hebrew},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_31},
  doi = {10.1007/978-3-319-07998-1_31}
}
BibTeX:
@inproceedings{George2009,
  author = {Sony George and Jon Yngve Hardeberg and Tomson G George},
  title = {A fully Automatic Redeye Correction Algorithm with Multilevel Eye Confirmation},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {82-89},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
BibTeX:
@article{George2010,
  author = {Sony George and Jon Y. Hardeberg and Tomson G. George and V. P. N. Nampoori},
  title = {Automatic Redeye Correction Algorithm with Multilevel Eye Confirmation},
  journal = {Journal of Imaging Science and Technology},
  publisher = {IST},
  year = {2010},
  volume = {54},
  number = {3},
  pages = {030404},
  keywords = {eye; image colour analysis; image sensors; photography; reflection},
  url = {http://link.aip.org/link/?IST/54/030404/1},
  doi = {10.2352/J.ImagingSci.Technol.2010.54.3.030404}
}
BibTeX:
@inproceedings{Gerhardt2007a,
  author = {Jeremie Gerhardt},
  title = {Spectral color reproduction versus color reproduction},
  booktitle = {Gj{\o}vik Color Imaging Symposium},
  year = {2007}
}
BibTeX:
@phdthesis{Gerhardt2007d,
  author = {Jeremie Gerhardt},
  title = {Spectral Color Reproduction: Model Based and Vector Error Diffusion Approaches},
  school = {Ecole Nationale Superieure des Telecommunications and Gj{\o}vik University College},
  year = {2007}
}
BibTeX:
@inproceedings{Gerhardt2007b,
  author = {Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Controlling the error in spectral vector error diffusion},
  booktitle = {Color Imaging XII: Processing, Hardcopy, and Applications},
  address = {San Jose, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2007},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {6493},
  pages = {649316}
}
BibTeX:
@inproceedings{Gerhardt2009,
  author = {Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Simple Comparison of Spectral Color Reproduction Workflows},
  booktitle = {16th Scandinavian Conference on Image Analysis},
  address = {Oslo, Norway},
  month = {Jun},
  year = {2009},
  series = {Lecture Notes in Computer Science},
  volume = {5575},
  pages = {550--559},
  url = {http://www.springerlink.com/link.asp?id=105633}
}
BibTeX:
@inproceedings{Gerhardt2007,
  author = {Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Spectral color reproduction},
  booktitle = {34th International Research Conference of iarigai},
  address = {Grenoble},
  month = {Sep},
  year = {2007}
}
BibTeX:
@inproceedings{Gerhardt2007c,
  author = {Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Spectral color reproduction versus color reproduction},
  booktitle = {Advances in Printing and Media Technology},
  address = {Zagreb, Croatia},
  publisher = {Acta Graphica Publishers},
  year = {2007},
  volume = {34},
  pages = {147-152},
  note = {ISBN 978-953-7292-04-1}
}
Abstract: This paper demonstrates the feasibility of vector error diffusion for spectral colour reproduction using a multi-channel printing device. Using a simplified spectral printer model we demonstrate that spectral vector error diffusion is able to produce a good spectral match, implicitly solves the problem of printer model inversion and achieves reduced visual noise (stochastic moire) compared to when using standard channel independent scalar error diffusion.
BibTeX:
@inproceedings{Gerhardt2006,
  author = {Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Spectral Colour Reproduction by Vector Error Diffusion},
  booktitle = {CGIV 2006 -- Third European Conference on Color in Graphics, Imaging and Vision},
  address = {Leeds, UK},
  year = {2006},
  pages = {469-473},
  note = {ISBN / ISSN: 0-89208-262-3}
}
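The technique in the abstract above can be made concrete: spectral vector error diffusion is a Floyd-Steinberg-style scan where the quantizer picks, per pixel, the Neugebauer primary closest in spectral distance, and the full spectral error vector is diffused to unprocessed neighbours. This is a minimal sketch under assumed array shapes, not the authors' implementation (which also involves a spectral printer model).

```python
import numpy as np

def spectral_ved(image, primaries):
    """Spectral vector error diffusion (illustrative sketch).
    image:     (H, W, L) target reflectance spectra in [0, 1]
    primaries: (P, L) Neugebauer primary reflectances
    Returns an (H, W) index map of chosen primaries."""
    h, w, _ = image.shape
    buf = image.astype(float).copy()
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            # quantize: the primary minimizing spectral RMS distance
            d = np.linalg.norm(primaries - buf[y, x], axis=1)
            k = int(np.argmin(d))
            out[y, x] = k
            err = buf[y, x] - primaries[k]
            # diffuse the spectral error vector (Floyd-Steinberg weights)
            if x + 1 < w:
                buf[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1, x - 1] += err * 3 / 16
                buf[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1, x + 1] += err * 1 / 16
    return out
```

On a patch of constant target spectra, the fraction of pixels assigned to each primary approximates the convex weights of the spectral match.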
BibTeX:
@inproceedings{Gerhardt2005,
  author = {Jeremie Gerhardt and Jon Yngve Hardeberg},
  title = {Spectral vector error diffusion},
  booktitle = {Second Gj{\o}vik Color Symposium},
  year = {2005}
}
BibTeX:
@inproceedings{Gerhardt2010,
  author = {Jeremie Gerhardt and Jean-Baptiste Thomas},
  title = {Toward an automatic color calibration for 3D displays},
  booktitle = {Color and Imaging Conference},
  address = {San Antonio, TX},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2010},
  pages = {5--10}
}
BibTeX:
@article{Gerhardt2008,
  author = {J{\'e}r{\'e}mie Gerhardt and Jon Yngve Hardeberg},
  title = {Spectral color reproduction minimizing spectral and perceptual color differences},
  month = {Dec},
  journal = {Color Research \& Application},
  year = {2008},
  volume = {33},
  number = {6},
  pages = {494-504}
}
Abstract: In this paper we present the design of an image Content-Based Indexing and Retrieval (CBIR) system which, based upon existing implementations of a number of well-known color descriptors, makes use of the bag-of-words or codebook model in order to construct a robust approach to the retrieval of images from a database in a query-by-example context. A new object image database was constructed specifically for this task, in an attempt to challenge the invariance properties of the system under controlled conditions of illumination, point of view and scale. The system permits the combined use of up to two of the different color descriptors considered. The experiments run over a subset of the image database show an improvement of the obtained results under some of the tested combinations, as well as the effect of the variation of the employed codebook size.
BibTeX:
@inproceedings{Gila2010,
  author = {Aitor Alvarez Gila and Guanqun Cao and Sheikh Faridul Hasan and Yu Hu},
  title = {Combining Color Descriptors for Improved Codebook Model-Based Image Retrieval},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {306--313}
}
Abstract: Many objective image quality assessment algorithms first apply quality metrics in local regions, producing a quality map, and then pool the quality values in the quality map into a single quality score. The simplest pooling method is the average of the quality values, which assumes that all quality values are independent and equally important. However, visual perception is so complex that the assumption underlying average pooling might be too strict. There is agreement that some regions in an image might be more perceptually significant, which has led to more advanced spatial pooling methods. In this work we evaluate existing spatial pooling methods for five important quality attributes, which have been proposed to reduce the complexity of image quality assessment. The results show that: (1) more advanced spatial pooling methods are generally better than the simple average; (2) spatial pooling depends on both the image quality metric and the attributes of the image.
BibTeX:
@article{Gonga2012,
  author = {Mingming Gong and Marius Pedersen},
  title = {Spatial pooling for measuring color printing quality attributes},
  month = {July},
  journal = {Journal of Visual Communication and Image Representation},
  year = {2012},
  volume = {23},
  number = {5},
  pages = {685--696},
  url = {http://www.sciencedirect.com/science/article/pii/S1047320312000600}
}
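Two pooling strategies of the kind compared in the paper above can be sketched as follows. Averaging only the worst local scores is one common "advanced" strategy; it is shown here as an assumed example, not one of the specific methods evaluated in the paper.

```python
import numpy as np

def pool_average(qmap):
    """Simple average pooling: every local quality value weighted equally."""
    return float(np.mean(qmap))

def pool_lowest_percentile(qmap, p=10):
    """Percentile pooling: average only the worst p% of local quality values,
    on the assumption that severe local artifacts dominate perceived quality."""
    q = np.sort(np.ravel(qmap))
    k = max(1, int(len(q) * p / 100))
    return float(np.mean(q[:k]))
```

For a quality map with one badly reproduced region, percentile pooling yields a lower (more pessimistic) score than the plain average, reflecting the locally visible artifact.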
BibTeX:
@inproceedings{Gouton2005,
  author = {Pierre Gouton and Loic Peigne and Gabrielle Menu and Jon Yngve Hardeberg},
  title = {Using a Standard Colour Camera to Correct Spatial Colorimetric Variation in Videoprojector Display},
  booktitle = {Proceedings of 7th International Conference on Quality Control by Artificial Vision (QCAV2005)},
  address = {Nagoya, Japan},
  month = {May},
  year = {2005},
  pages = {313-318}
}
BibTeX:
@inproceedings{Green2015,
  author = {Phil Green},
  title = {Baseline gamut mapping method for the perceptual reference medium gamut},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-22},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109904}
}
BibTeX:
@inproceedings{Green2015a,
  author = {Phil Green},
  title = {False-colour palette generation using a reference colour gamut},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-23},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109905}
}
BibTeX:
@inproceedings{Green2015b,
  author = {Phil Green},
  title = {Why simulations of colour for CVD observers might not be what they seem},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-38},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109916}
}
Abstract: A requirement for a baseline algorithm for mapping from the ICC Perceptual Reference Medium Gamut to destination media in ICC output profiles has been identified. Before such a baseline algorithm can be recommended, it requires careful evaluation by the user community. A framework for encoding the gamut boundary and computing intersections with the PRMG and output gamuts respectively is described. This framework provides a basis for comparing different gamut mapping algorithms, and a number of candidate algorithms are also described.
BibTeX:
@conference{Phil2013,
  author = {Phil Green},
  title = {Gamut mapping for the perceptual reference gamut},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sept},
  publisher = {IEEE},
  year = {2013}
}
Abstract: In this paper we propose a novel Computational Attention Model (CAM) that fuses bottom-up, top-down and salient motion visual cues to compute visual salience in surveillance videos. When dealing with a number of visual features/cues in a system, it is always challenging to combine or fuse them. As there is no commonly agreed natural way of combining the different conspicuity maps obtained from different features (face and motion, for example), the challenge is to find the right mix of visual cues to get a salience map that is the closest to a corresponding gaze map. In the literature, many CAMs have used fixed weights for combining different visual cues. This is computationally attractive but is a very crude way of combining the different cues. Furthermore, the weights are typically set in an ad hoc fashion. Therefore, in this paper we propose a machine learning approach, using an Artificial Neural Network (ANN), to estimate these weights. The ANN is trained using gaze maps obtained by eye tracking in psycho-physical experiments. These weights are then used to combine the conspicuities of the different visual cues in our CAM, which is later applied to surveillance videos. The proposed model is designed to consider important visual cues typically present in surveillance videos, and to combine their conspicuities via the ANN. The obtained results are encouraging and show a clear improvement over state-of-the-art CAMs.
BibTeX:
@article{Guraya2015,
  author = {Fahad Fazal Elahi Guraya and Faouzi Alaya Cheikh},
  title = {Neural networks based visual attention model for surveillance videos},
  journal = {Neurocomputing},
  year = {2015},
  volume = {149, Part C},
  number = {0},
  pages = {1348--1359},
  keywords = {Visual salience},
  url = {http://www.sciencedirect.com/science/article/pii/S0925231214011217},
  doi = {http://dx.doi.org/10.1016/j.neucom.2014.08.062}
}
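The cue-fusion step described in the abstract above can be illustrated with a simplified linear stand-in for the ANN: learn per-cue weights from an eye-tracking gaze map by least squares, then combine the conspicuity maps with those weights. The function names are hypothetical, and the linear model is a deliberate simplification of the paper's neural network.

```python
import numpy as np

def learn_fusion_weights(conspicuity_maps, gaze_map):
    """Learn per-cue fusion weights from a gaze map.
    Linear least-squares stand-in for the paper's ANN:
    solve for w minimizing ||F w - g||, where F stacks the
    (flattened) conspicuity maps column-wise."""
    F = np.stack([m.ravel() for m in conspicuity_maps], axis=1)
    w, *_ = np.linalg.lstsq(F, gaze_map.ravel(), rcond=None)
    return w

def fuse(conspicuity_maps, w):
    """Combine the per-cue conspicuity maps into one salience map."""
    return sum(wi * m for wi, m in zip(w, conspicuity_maps))
```

With fixed ad hoc weights this second function alone would reproduce the crude combination the paper criticizes; learning `w` from gaze data is what the paper's ANN replaces.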
BibTeX:
@inproceedings{Guraya2012,
  author = {Fahad Fazal Elahi Guraya and Victor Medina and Faouzi Alaya Cheikh},
  title = {Visual attention based surveillance videos compression},
  booktitle = {Color and Imaging Conference},
  address = {Los Angeles, CA, USA},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2012},
  pages = {1--8}
}
BibTeX:
@conference{Hachicha2013,
  author = {Walid Hachicha and Azeddine Beghdadi and Faouzi Alaya Cheikh},
  title = {Stereo image quality assessment using a binocular just noticeable difference model},
  booktitle = {International Conference on Image Processing (ICIP)},
  address = {Melbourne, Australia},
  publisher = {IEEE},
  year = {2013}
}
BibTeX:
@inproceedings{Hachicha2013a,
  author = {Walid Hachicha and Azeddine Beghdadi and Faouzi Alaya Cheikh},
  title = {1D Directional DCT-based Stereo Residual Compression},
  booktitle = {Proceedings of the European Signal Processing Conference (EUSIPCO)},
  address = {Marrakech, Morocco},
  year = {2013},
  url = {http://www.eurasip.org/Proceedings/Eusipco/Eusipco2010/Contents/proceedings.html}
}
BibTeX:
@conference{Hachicha2014,
  author = {W. Hachicha and M. Kaaniche and A. Beghdadi and F. Alaya Cheikh},
  title = {Optimized Residual Image for Stereo Image Coding},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Dec},
  year = {2014}
}
BibTeX:
@conference{Hardeberg2011,
  author = {Jon Y. Hardeberg},
  title = {Recent progress in quantifying colour reproduction quality},
  booktitle = {3rd European Workshop on Visual Information Processing (EUVIP 2011)},
  address = {Paris, France},
  month = {July},
  year = {2011}
}
BibTeX:
@misc{Hardeberg2011a,
  author = {Jon Y. Hardeberg},
  title = {Towards one shot multispectral colour image acquisition},
  month = {July},
  year = {2011},
  note = {CIE Session 2011, Sun City, South Africa}
}
Abstract: Quantifying the perceptual difference between original and reproduced (and inevitably modified) color images is currently a key research challenge in the field of color imaging. Such information can be extremely valuable for instance in the development of new equipment and algorithms for color reproduction.
While in many research areas it is common practice to obtain quantitative quality information by the use of perceptual tests, in which the judgments of several human observers are being collected and carefully analyzed statistically, this approach has serious limitations for practical use, in particular because of the time consumption.
Motivated by this, and aided by the ever increasing available knowledge about the mechanisms of the human visual system, the quest for perceptual color image quality metrics that can adequately predict human quality judgments of complex images, has been on for several decades. However, unfortunately, the Holy Grail is yet to be found.
The current paper outlines the state of the art of this field, including benchmarking of existing metrics, presents recent research, and proposes promising areas for further work. Aspects that are covered in particular include new models and metrics for color image quality, and new frameworks for using the metrics to improve color image representation and reproduction algorithms.
BibTeX:
@inproceedings{Hardeberg2010,
  author = {Jon Yngve Hardeberg},
  title = {Color by Numbers -- Quantifying the Quality of Color Reproduction},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {429--430},
  note = {Keynote}
}
BibTeX:
@conference{Hardeberg2010a,
  author = {Jon Yngve Hardeberg},
  title = {Multispectral Color Imaging},
  booktitle = {Industrial Visionday},
  address = {Copenhagen, Denmark},
  month = {May},
  year = {2010},
  note = {Invited talk}
}
BibTeX:
@conference{Hardeberg2006a,
  author = {Jon Yngve Hardeberg},
  title = {Recent advances in acquisition and reproduction of multispectral images},
  booktitle = {EUSIPCO},
  year = {2006}
}
BibTeX:
@conference{Hardeberg2006b,
  author = {Jon Yngve Hardeberg},
  title = {Color science, color management, and color image quality},
  booktitle = {NOBIM},
  year = {2006}
}
BibTeX:
@article{Hardeberg2005b,
  author = {Jon Yngve Hardeberg},
  title = {Colorimetric Scanner Characterization},
  journal = {Acta Graphica},
  year = {2005},
  volume = {155}
}
Abstract: The quality of a multispectral color image acquisition system depends on many factors, the spectral sensitivity of the different channels being one of them. In a relatively common setup, a multispectral camera is implemented by coupling a monochrome digital camera with a set of optical filters, typically mounted on a filter wheel. The properties of these filters are an important component of the system design. Different methods have been proposed for the design or selection of appropriate filters. In this article we review several methods used for selection of an optimal subset of filters from a set of available filters. The different filter selection methods are subjected to a comprehensive evaluation procedure, in which their quality is evaluated mainly in terms of the ability of the resulting system to reconstruct scene spectral reflectances.
BibTeX:
@article{Hardeberg2004a,
  author = {Jon Yngve Hardeberg},
  title = {Filter Selection for Multispectral Color Image Acquisition},
  month = {Mar/Apr},
  journal = {The Journal of Imaging Science and Technology},
  year = {2004},
  volume = {48},
  number = {2},
  pages = {105-110},
  note = {ISBN / ISSN: 1062-3701}
}
Abstract: The quality of a multispectral color image acquisition system depends on many factors, the spectral sensitivity of the different channels being one of them. In a relatively common setup, a multispectral camera is implemented by coupling a monochrome digital camera with a set of optical filters, typically mounted on a filter wheel. The properties of these filters are an important component of the system design.
Different methods have been proposed for the design or selection of appropriate filters. In this paper we review several methods used for selection of an optimal subset of filters from a set of available filters. The different filter selection methods are subjected to a comprehensive evaluation procedure, in which their quality is evaluated mainly in terms of the ability of the resulting system to reconstruct scene spectral reflectances.
BibTeX:
@conference{Hardeberg2003a,
  author = {Jon Yngve Hardeberg},
  title = {Filter Selection for Multispectral Color Image Acquisition},
  booktitle = {PICS 2003: The PICS Conference, An International Technical Conference on The Science and Systems of Digital Photography, including the Fifth International Symposium on Multispectral Color Science},
  address = {Rochester, NY},
  month = {May},
  year = {2003},
  pages = {177-182},
  note = {ISBN / ISSN: 0-89208-245-3}
}
Abstract: How many components are needed to represent the spectral reflectance of a surface? What is the dimension of a spectral reflectance? How many image channels are needed for the acquisition of a multispectral colour image? Such and similar questions have been discussed extensively in the literature. We have done a survey of the literature concerning this topic, and have seen that there is a large variation in the answers. We propose a method to quantify the effective dimension of a set of spectral reflectances. The method is based on a Principal Component Analysis, and in particular on specific requirements for the accumulated energy of the principal components. We apply the analysis to five different databases of spectral reflectances, and conclude that they have very different statistical properties. The effective dimension of a set of Munsell colour spectra is found to be 18, that of a set of natural object reflectances 23, while the effective dimension of a set of reflectances of pigments used in oil painting was only 13.
BibTeX:
@inproceedings{Hardeberg2002,
  author = {Jon Yngve Hardeberg},
  title = {On the Spectral Dimensionality of Object Colors},
  booktitle = {The First European Conference on Color in Graphics, Imaging and Vision (CGIV)},
  address = {Poitiers, France},
  month = {Apr},
  year = {2002},
  pages = {480-485},
  note = {ISBN / ISSN: 0-89208-239-9}
}
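The accumulated-energy criterion described in the abstract above can be sketched with a standard PCA: the effective dimension is the number of principal components needed for the cumulative eigenvalue sum to reach a chosen energy fraction. The 0.999 threshold and function name here are illustrative; the paper defines its own specific requirements.

```python
import numpy as np

def effective_dimension(reflectances, energy=0.999):
    """Effective dimension of a set of spectral reflectances:
    the number of principal components whose accumulated energy
    (cumulative sum of PCA eigenvalues) reaches the given fraction.
    reflectances: (n_samples, n_wavelengths) array."""
    X = reflectances - reflectances.mean(axis=0)   # center the data
    s = np.linalg.svd(X, compute_uv=False) ** 2    # PCA eigenvalues (scaled)
    cum = np.cumsum(s) / np.sum(s)                 # accumulated energy
    return int(np.searchsorted(cum, energy) + 1)
```

For data confined to a k-dimensional subspace, the function returns exactly k; for real reflectance sets, the answer depends strongly on the threshold, which is why the choice of criterion matters so much to the numbers reported in the paper.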
Abstract: Color image quality is becoming an increasingly important factor in the consumer imaging industry. Users of imaging devices such as Multi-Function Peripherals (MFP) have increasing expectations to the quality of the reproduced images. In this paper we address the subject of color image quality from a practical point of view, and from the point of view of a provider of imaging technology for consumer MFPs. We show how the notion of color image quality is ultimately tied to the preferences of the end users. Because of this, practical quality evaluation experiments involving a panel of human observers are a very useful tool to quantify color image quality. As an illustration, we then describe a color image quality evaluation experiment, which was carried out in order to benchmark the copy function of two MFP devices.
BibTeX:
@conference{Hardeberg2002a,
  author = {Jon Yngve Hardeberg},
  title = {Color Image Quality for Multi-Function Peripherals},
  booktitle = {PICS 2002: IS\&T's PICS Conference, An International Technical Conference on Digital Image Capture and Associated System, Reproduction and Image Quality Technologies},
  address = {Portland, Oregon, USA},
  month = {Apr},
  year = {2002},
  pages = {76-81},
  note = {ISBN / ISSN: 0-89208-238-0}
}
Abstract: The current paper provides methods to correct the artifact known as "red eye" by means of digital color image processing. This artifact is typically formed in amateur photographs taken with a built-in camera flash. To correct red eye artifacts, an image mask is computed by calculating a colorimetric distance between a prototypical reference "red eye" color and each pixel of the image containing the red eye. Various image processing algorithms, such as thresholding, blob analysis, and morphological filtering, are applied to the mask in order to eliminate noise, reduce errors, and facilitate a more natural looking result. The mask serves to identify pixels in the color image needing correction, and further serves to identify the amount of correction needed. Pixels identified as having red eye artifacts are modified to a substantially monochrome color, while the bright specular reflection of the eye is preserved.
BibTeX:
@article{Hardeberg2002b,
  author = {Jon Yngve Hardeberg},
  title = {Digital Red Eye Removal},
  month = {Jul/Aug},
  journal = {The Journal of Imaging Science and Technology},
  year = {2002},
  volume = {46},
  number = {4},
  pages = {375-379},
  note = {ISBN / ISSN: 1062-3701}
}
Abstract: The goal of the work reported in this dissertation is to develop methods for the acquisition and reproduction of high quality digital colour images. To reach this goal it is necessary to understand and control the way in which the different devices involved in the entire colour imaging chain treat colours. Therefore we addressed the problem of colorimetric characterisation of scanners and printers, providing efficient and colorimetrically accurate means of conversion between a device-independent colour space such as the CIELAB space, and the device-dependent colour spaces of a scanner and a printer. First, we propose a new method for the colorimetric characterisation of colour scanners. It consists in applying a non-linear correction to the scanner RGB values followed by a 3rd order 3D polynomial regression function directly to CIELAB space. This method gives very good results in terms of residual colour differences. The method has been successfully applied to several colour image acquisition devices, including digital cameras. Together with other proposed algorithms for image quality enhancements it has allowed us to obtain very high quality digital colour images of fine art paintings. An original method for the colorimetric characterisation of a printer is then proposed. The method is based on a computational geometry approach. It uses a 3D triangulation technique to build a tetrahedral partition of the printer colour gamut volume and it generates a surrounding structure enclosing the definition domain. The characterisation provides the inverse transformation from the device-independent colour space CIELAB to the device-dependent colour space CMY, taking into account both colorimetric properties of the printer, and colour gamut mapping. 
To further improve the colour precision and colour fidelity we have performed another study concerning the acquisition of multispectral images using a monochrome digital camera together with a set of K>3 carefully selected colour filters. Several important issues are addressed in this study. A first step is to perform a spectral characterisation of the image acquisition system to establish the spectral model. The choice of colour chart for this characterisation is found to be very important, and a new method for the design of an optimised colour chart is proposed. Several methods for an optimised selection of colour filters are then proposed, based on the spectral properties of the camera, the illuminant, and a set of colour patches representative for the given application. To convert the camera output signals to device-independent data, several approaches are proposed and tested. One consists in applying regression methods to convert to a colour space such as CIEXYZ or CIELAB. Another method is based on the spectral model of the acquisition system. By inverting the model, we can estimate the spectral reflectance of each pixel of the imaged surface. Finally we present an application where the acquired multispectral images are used to predict changes in colour due to changes in the viewing illuminant. This method of illuminant simulation is found to be very accurate, and working on a wide range of illuminants having very different spectral properties. The proposed methods are evaluated by their theoretical properties, by simulations, and by experiments with a multispectral image acquisition system assembled using a CCD camera and a tunable filter in which the spectral transmittance can be controlled electronically.
BibTeX:
@phdthesis{Hardeberg1999,
  author = {Jon Yngve Hardeberg},
  title = {Acquisition and reproduction of colour images: colorimetric and multispectral approaches},
  school = {{\'E}cole Nationale Sup{\'e}rieure des T{\'e}l{\'e}communications},
  year = {1999},
  url = {http://www.colorlab.no/content/download/21936/215653/file/Jon_Y_Hardeberg_phd_thesis.pdf}
}
BibTeX:
@article{Hardeberg2008,
  author = {Jon Yngve Hardeberg and Eriko Bando and Marius Pedersen},
  title = {Evaluating colour image difference metrics for gamut-mapped images},
  month = {Aug},
  journal = {Coloration Technology},
  year = {2008},
  volume = {124},
  number = {4},
  pages = {243-253},
  url = {http://www3.interscience.wiley.com/cgi-bin/fulltext/121356959/PDFSTART}
}
Abstract: In this paper we describe the preliminary results of a collaborative research project conducted by researchers at Gjøvik University College and Lillehammer University College. The goal of the project is to develop methods and tools to improve the control of color information in the production and presentation of digital video. The project represents a unique attempt to bring together two scientific communities - graphic arts and television/video production - on a theme of common interest, namely color. Promising results have been obtained by using an innovative color warping algorithm for color correction in editing of digital video.
BibTeX:
@inproceedings{Hardeberg2002c,
  author = {Jon Yngve Hardeberg and Ivar Farup and {\O}yvind Kol\r{a}s and Gudmund Stjernvang},
  title = {Color management for digital video: Color correction in the editing phase},
  booktitle = {29th International iarigai Research Conference. Proceedings: Advances in Graphic Arts \& Media Technology},
  address = {Lucerne, Switzerland},
  month = {Sep},
  year = {2002}
}
Abstract: The partial results of a collaborative research project conducted by researchers at Gjøvik University College and Lillehammer University College are described in this paper. The goal of the project is to develop methods and tools for improving the control of color information in the production and presentation of digital video. The project represents a unique attempt to bring together two scientific communities - graphic arts and television/video production - on a theme of common interest, namely color. The color quality achieved by a system for digital distribution and presentation of cinema commercials has been investigated. Results show that the "quality bottleneck" is the digital projector. The "business-type" projector does not yield sufficient image quality, especially in large theaters.
BibTeX:
@article{Hardeberg2005a,
  author = {Jon Yngve Hardeberg and Ivar Farup and Gudmund Stjernvang},
  title = {Color quality analysis of a system for digital distribution and projection of cinema commercials},
  month = {Apr},
  journal = {SMPTE Motion Imaging Journal},
  year = {2005},
  volume = {114},
  number = {4},
  pages = {146-151}
}
Abstract: In this paper we describe the partial results of a collaborative research project conducted by researchers at Gjøvik University College and Lillehammer University College. The goal of the project is to develop methods and tools to improve the control of color information in the production and presentation of digital video. The project represents a unique attempt to bring together two scientific communities – graphic arts and television/video production – on a theme of common interest, namely color. We have investigated the color quality achieved by a system for digital distribution and presentation of cinema commercials. Our results show that the "quality bottleneck" is the digital projector. Especially in large theaters, the "business-type" projector does not yield sufficient image quality.
BibTeX:
@conference{Hardeberg2003b,
  author = {Jon Yngve Hardeberg and Ivar Farup and Gudmund Stjernvang},
  title = {Digital cinema commercials in Norway, is the quality good enough?},
  booktitle = {The SMPTE International Conference, D-Cinema and Beyond},
  address = {Milano, Italy},
  month = {Nov},
  year = {2003}
}
BibTeX:
@techreport{Hardeberg2003c,
  author = {Jon Yngve Hardeberg and Ivar Farup and Gudmund Stjernvang},
  title = {Proceedings from Gj{\o}vik Color Imaging Symposium 2003},
  year = {2003},
  number = {7}
}
BibTeX:
@conference{Hardeberg2013,
  author = {Jon Y. Hardeberg and Sony George and Ferdinand Deger and Ivar Baarstad and Julio Ernesto Hernandez Palacios and Trond Løke},
  title = {Hyperspectral image capture and analysis of The Scream painted by Edvard Munch in 1893},
  booktitle = {MUNCH150 Conference},
  address = {Oslo, Norway},
  month = {June},
  year = {2013},
  url = {http://www.hf.uio.no/iakh/english/research/projects/aula-project/munch-150/}
}
BibTeX:
@inproceedings{Hardeberg2007,
  author = {Jon Yngve Hardeberg and Jeremie Gerhardt},
  title = {Towards spectral color reproduction},
  booktitle = {Ninth International Symposium on Multispectral Colour Science and Application},
  publisher = {IS\&T},
  year = {2007},
  pages = {16--22},
  note = {ISBN 978-0-89208-272-8}
}
BibTeX:
@inproceedings{Hardeberg2005,
  author = {Jon Yngve Hardeberg and Jeremie Gerhardt},
  title = {Caracterisation spectrale d'un systeme d'impression jet d'encre huit encres},
  booktitle = {Revue Traitement du Signal},
  year = {2005},
  volume = {21}
}
Abstract: The experimental setup of an 8-channel inkjet printing system intended for spectral color reproduction is proposed. A spectral model of the printer based on the Yule-Nielsen modified spectral Neugebauer equation is presented, discussed, and evaluated experimentally. Although the spectral and colorimetric precision of the printer model leaves room for improvement, the presented research forms an interesting foundation for further research in the field of spectral color reproduction.
BibTeX:
@inproceedings{Hardeberg2004,
  author = {Jon Yngve Hardeberg and Jeremie Gerhardt},
  title = {Characterization of an Eight Colorant Inkjet System for Spectral Color Reproduction},
  booktitle = {CGIV 2004 -- Second European Conference on Color in Graphics, Imaging and Vision},
  address = {Aachen, Germany},
  month = {Apr},
  year = {2004},
  pages = {263-267},
  note = {ISBN / ISSN: 0-89208-250-X}
}
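The Yule-Nielsen modified spectral Neugebauer (YNSN) model named in the two entries above predicts the reflectance spectrum of a halftone as a weighted mixture of the Neugebauer primary spectra raised to the power 1/n. A minimal sketch under the usual Demichel-weight assumption; the toy spectra and the value n=2 below are illustrative, not taken from the cited papers:

```python
import numpy as np

def demichel_weights(coverages):
    """Demichel weights: fractional area of each of the 2^k overprint
    combinations of k inks, given the per-ink area coverages."""
    k = len(coverages)
    weights = np.empty(2 ** k)
    for i in range(2 ** k):
        w = 1.0
        for j, a in enumerate(coverages):
            w *= a if (i >> j) & 1 else (1.0 - a)
        weights[i] = w
    return weights

def ynsn_predict(coverages, primaries, n=2.0):
    """Yule-Nielsen modified spectral Neugebauer prediction.

    coverages: effective area coverage per ink, length k
    primaries: measured reflectance spectra of the 2^k Neugebauer
               primaries, shape (2^k, wavelengths)
    n:         Yule-Nielsen factor (n=1 recovers the plain spectral
               Neugebauer model)
    """
    w = demichel_weights(coverages)
    return (w @ primaries ** (1.0 / n)) ** n

# Toy example: one ink, two "spectra" of 3 bands each.
prims = np.array([[0.9, 0.9, 0.9],   # bare paper
                  [0.1, 0.2, 0.3]])  # full ink coverage
r = ynsn_predict([0.5], prims, n=2.0)
```

With n=1 the prediction is a plain convex combination of the primaries; the n-value is normally fitted to measurements to account for optical dot gain.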
BibTeX:
@techreport{Hardeberg2007a,
  author = {Jon Yngve Hardeberg and Peter Nussbaum and Ali Alsam and Ivar Farup},
  title = {Proceedings from Gj{\o}vik Color Imaging Symposium 2007},
  year = {2007},
  number = {4}
}
Abstract: For the third consecutive year Gjøvik University College and The Norwegian Color Research Laboratory organised an international symposium on colour imaging. Gjøvik Color Imaging Symposium 2005 took place November 30 and December 1, 2005, at Gjøvik University College in Gjøvik, Norway. The first day of the conference focused mainly on applied colour management, whereas the second day was devoted to current topics in colour imaging research, such as advanced colour management, spatial colour imaging, colour vision and colour constancy.
BibTeX:
@techreport{Hardeberg2006,
  author = {Jon Yngve Hardeberg and Peter Nussbaum and Ali Alsam and Ivar Farup},
  title = {Proceedings from Gj{\o}vik Color Imaging Symposium 2005},
  year = {2006},
  number = {9}
}
BibTeX:
@article{Hardeberg2007b,
  author = {Jon Yngve Hardeberg and Peter Nussbaum and Sylvain Roch and Ondrej Panak},
  title = {Time matters in soft proofing},
  journal = {Acta Graphica - Journal of Printing Science and Graphic Communication},
  year = {2007},
  volume = {19},
  number = {1-2},
  pages = {1-10},
  note = {ISSN 0353-4707}
}
BibTeX:
@inproceedings{Hardeberg2003,
  author = {Jon Yngve Hardeberg and Lars Seime and Trond Skogstad},
  title = {Colorimetric characterization of projection displays using a digital colorimetric camera},
  booktitle = {Projection Displays IX},
  address = {Santa Clara, CA, USA},
  month = {Mar},
  year = {2003},
  series = {Proceedings of SPIE/IS\&T},
  volume = {5002},
  pages = {51-61},
  note = {ISBN / ISSN: 0-8194-4802-8}
}
BibTeX:
@inproceedings{Hardeberg2015,
  author = {Jon Yngve Hardeberg and Raju Shrestha},
  title = {Multispectral colour imaging: Time to move out of the lab?},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015},
  pages = {28--32}
}
BibTeX:
@mastersthesis{HIEP2013,
  author = {Ken Ly Minh Hiep},
  title = {Perception-based Stereo Image Compression},
  school = {Gj{\o}vik University College},
  year = {2013}
}
BibTeX:
@mastersthesis{Hoest2012,
  author = {Annick van der Hoest},
  title = {Eye Contact in Leisure Video Conferencing},
  school = {Gj{\o}vik University College},
  year = {2012}
}
Abstract: Human colour vision is the result of a complex process involving topics ranging from the physics of light to perception. Whereas the diversity of light entering the eye in principle spans an infinite-dimensional vector space in terms of spectral power distributions, the space of human colour perceptions is three-dimensional. One important consequence of this is that a variety of colours can be visually matched by a mixture of only three adequately chosen reference lights. It has been observed that there exists one particular set of monochromatic reference lights that, according to a certain definition, is optimal for producing colour matches. These reference lights are commonly denoted prime colours. In the present paper, we rigorously show that the existence of prime colours is not particular to the human visual system, as sometimes stated, but rather an algebraic consequence of the manner in which a kind of colorimetric functions called colour-matching functions are defined and transformed. The solution is based on maximisation of a determinant determining the gamut size of the colour space spanned by the prime colours. Cramer's rule for solving a set of linear equations is an essential part of the proof. By means of examples, it is shown that mathematically the optimal set of reference lights is not unique in general, and that the existence of a maximum determinant is not a necessary condition for the existence of prime colours.
BibTeX:
@article{Hornaes2005,
  author = {Hans Petter Horn{\ae}s and Jan Henrik Wold and Ivar Farup},
  title = {Colorimetry and prime colours - a theorem},
  journal = {Journal of Mathematical Biology},
  year = {2005},
  volume = {51},
  number = {2},
  pages = {144-156}
}
BibTeX:
@inproceedings{Imran2014a,
  author = {Imran, A.S. and Cheikh, F.A. and Kowalski, S.J.},
  title = {Media annotations in hyperlinked pedagogical platforms},
  booktitle = {Web and Open Access to Learning (ICWOAL), 2014 International Conference on},
  month = {Nov},
  year = {2014},
  pages = {1-6},
  keywords = {Web sites;computer aided instruction;interactive systems;multimedia computing;video signal processing;wavelet transforms;GUC;Gjøvik university college;HIP;automatic lecture video annotations;eLearning Websites;hyper interactive presenter;hyperlinked pedagogical platforms;media annotations;media modality;media rich eLearning platforms;media rich platforms;multimedia components;multimedia contents;pedagogical content retrieval;presentation slides annotation;state-of-the-art SIFT;wavelet energy transform;Educational institutions;Electronic learning;Hip;Media;Synchronization;Visualization;eLearning;hypermedia;interactive learning;multimedia;pedagogical platform},
  doi = {10.1109/ICWOAL.2014.7009233}
}
BibTeX:
@conference{Imran2010a,
  author = {Ali Imran and Fahad Guraya and Faouzi Alaya Cheikh},
  title = {A visual attention based reference free perceptual quality metric},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Jul},
  year = {2010}
}
BibTeX:
@conference{Imran2014,
  author = {A. Imran and L. Rahadianti and F. Alaya Cheikh and S. Yayilgan},
  title = {Objective keyword selection for lecture video annotation},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Dec},
  year = {2014}
}
BibTeX:
@inproceedings{Imran2014b,
  author = {Imran, A.S. and Siakidis, C. and Cheikh, F.A. and Kowalski, S.J.},
  title = {3-dimensional tag clouds for multimedia driven pedagogical platforms},
  booktitle = {Computer Applications and Information Systems (WCCAIS), 2014 World Congress on},
  month = {Jan},
  year = {2014},
  pages = {1-5},
  keywords = {Web sites;cloud computing;computer aided instruction;meta data;multimedia computing;text analysis;3-dimensional tag clouds;3dimensional tag clouds;PowerPoint presentations;e-learning Web sites;educational Web sites;hyperlinks;lecture videos;media modality;meta-data information;multimedia component;multimedia contents;text document;visual representation;word cloud;Electronic publishing;Image color analysis;Information services;Navigation;Optimization;Satellite broadcasting;Visualization;E-learning;hypermedia;pedagogical;tag clouds;word cloud},
  doi = {10.1109/WCCAIS.2014.6916545}
}
BibTeX:
@phdthesis{Imran2013a,
  author = {Ali Shariq Imran},
  title = {Media Content Analysis for Creation and Annotation of Video Learning Objects},
  month = {May},
  school = {Gj{\o}vik University College, and Oslo University, Norway},
  year = {2013}
}
BibTeX:
@inproceedings{Imran2012a,
  author = {Ali Shariq Imran and Sukalpa Chanda and Faouzi Alaya Cheikh and Katrin Franke and Umapada Pal},
  title = {Cursive Handwritten Segmentation and Recognition for Instructional Videos},
  booktitle = {Eighth International Conference on Signal Image Technology and Internet Based Systems},
  address = {Sorrento, Naples, Italy},
  month = {Nov},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {155--160}
}
Abstract: This paper proposes a reference free perceptual quality metric for blackboard lecture images. The text in the image is mostly affected by high compression ratios and de-noising filters, which cause blocking and blurring artifacts. As a result the perceived text quality of the blackboard image degrades. The degraded text is not only difficult for humans to read, but it also makes the optical character recognition task even more difficult. Therefore, we first estimate the presence of these artifacts and then use these estimates in our proposed quality metric. The blocking and blurring features are extracted from the image content on block boundaries without the presence of a reference image, which makes our metric reference free. The metric also uses a visual saliency model to mimic the human visual system (HVS) by focusing only on the distortions in perceptually important regions, i.e. those regions which contain the text. Moreover, psychophysical experiments are conducted that show very good correlation between the mean opinion score and quality scores obtained from our reference free perceptual quality metric (RF-PQM). The correlation results are also compared with standard reference and reference free metrics.
BibTeX:
@conference{Imran2010,
  author = {Ali Shariq Imran and Faouzi Alaya Cheikh},
  title = {Blind Image Quality Metric For Blackboard Lecture Images},
  booktitle = {European Signal Processing Conference (EUSIPCO)},
  address = {Aalborg, Denmark},
  month = {Aug},
  year = {2010}
}
BibTeX:
@inproceedings{Imran2012,
  author = {Ali Shariq Imran and Alejandro Moreno and Faouzi Alaya Cheikh},
  title = {Exploiting Visual Cues in Non-Scripted Lecture Videos for Multi-modal Action Recognition},
  booktitle = {Eighth International Conference on Signal Image Technology and Internet Based Systems},
  address = {Sorrento, Naples, Italy},
  month = {Nov},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {8--14}
}
BibTeX:
@inproceedings{Imran2013,
  author = {Ali Shariq Imran and Laksmita Rahadianti and Faouzi Alaya Cheikh and Sule Yildirim Yayilgan},
  title = {Semantic Keyword Selection for Automatic Video Annotation},
  booktitle = {Signal-Image Technology \& Internet-Based Systems (SITIS)},
  publisher = {IEEE},
  year = {2013},
  pages = {241--246}
}
Abstract: The archives of motion pictures represent an important part of precious cultural heritage. However, these collections of cinematography are vulnerable to different types of distortion; in particular, the chemical support on which they are recorded becomes unstable with time unless they are stored at low temperatures. Some defects on color movies, such as bleaching, are irreversible and hence beyond the capability of the digital restoration process. We propose here an automatic color correction technique which automates the color fading restoration process. The proposed method is based on the STRESS model, an automatic image enhancement technique for correcting color images so that they are more perceptually acceptable to the human visual system. We also propose some preprocessing techniques to be applied to the distorted images prior to applying the STRESS algorithm. These preprocessing techniques, which include Principal Component Analysis (PCA) and saturation enhancement, ultimately make the resulting image more appealing and acceptable to the human visual system.
BibTeX:
@mastersthesis{Islam2010a,
  author = {ABM Tariqul Islam},
  title = {Spatio-Temporal Colour Correction of Strongly Degraded Films},
  school = {Gj{\o}vik University College},
  year = {2010},
  keywords = {Digital film restoration, automatic colour correction, colour constancy, image enhancement, spatial color algorithms.}
}
Abstract: The archives of motion pictures represent an important part of precious cultural heritage. Unfortunately, these cinematography collections are vulnerable to distortions such as colour fading, which is beyond the capability of the photochemical restoration process. Spatial colour algorithms such as Retinex and ACE provide helpful tools for restoring strongly degraded colour films, but there are some challenges associated with these algorithms. We present an automatic colour correction technique for digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve an 80 percent reduction of the computational time compared to processing every single frame individually. We performed two user experiments and found that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.
BibTeX:
@inproceedings{Islam2011,
  author = {ABM Tariqul Islam and Ivar Farup},
  title = {Spatio-temporal colour correction of strongly degraded movies},
  booktitle = {Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7866},
  pages = {78660Z}
}
Abstract: Development and implementation of spatial color algorithms has been an active field of research in image processing for the last few decades. A number of investigations have been carried out so far in mimicking the properties of the human visual system (HVS). Various algorithms and models have been developed, but they produce more or less neutral output. Some applications demand the preservation of appearance of the original image along with the enhancement performed by these models. It is our attempt in this paper to present a number of techniques that are designed to satisfy the requirements of those applications. Our techniques work in two general stages. In the first stage, properties of the original image are extracted and stored. In the second stage, the resulting images from the image enhancement models are processed with those properties. Most of these techniques perform quite well for different categories of images. We combine different approaches such as gamma, scaling, linear, scaling and clipping to preserve properties like color cast, maximum and minimum channel value etc. Our methods have been extended for Low-key and High-key images as well.
BibTeX:
@conference{Islam2010,
  author = {ABM Tariqul Islam and Ivar Farup},
  title = {Enhancing the output of spatial color algorithms},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Jul},
  year = {2010}
}
BibTeX:
@mastersthesis{Pelegrina2014,
  author = {Antonio Pelegrina Jimenez},
  title = {Evaluation of Image Quality of State-of-the-art CT Vendors in the Norwegian Market},
  school = {Gj{\o}vik University College},
  year = {2014}
}
BibTeX:
@mastersthesis{Jimenez2014,
  author = {Samuel Jimenez},
  title = {Physical interaction in augmented environments},
  school = {University of Jean Monnet, France/Gj{\o}vik University College, Norway},
  year = {2014}
}
BibTeX:
@mastersthesis{JOURNES2014,
  author = {Franck JOURNES},
  title = {A study of image quality assessment and color image reconstruction algorithms for mono-sensor camera},
  school = {University of Jean Monnet, France/Gj{\o}vik University College, Norway},
  year = {2014}
}
BibTeX:
@article{Khan2014,
  author = {Khan, Fahad Shahbaz and Beigpour, Shida and van de Weijer, Joost and Felsberg, Michael},
  title = {Painting-91: a large scale database for computational painting categorization},
  journal = {Machine Vision and Applications},
  publisher = {Springer Berlin Heidelberg},
  year = {2014},
  pages = {1-13},
  keywords = {Painting categorization; Visual features; Image classification},
  url = {http://dx.doi.org/10.1007/s00138-014-0621-6},
  doi = {10.1007/s00138-014-0621-6}
}
Abstract: The authors present a new framework for algorithms for a wide range of image enhancement and reproduction applications, named STRESS: Spatio-Temporal Retinex-inspired Envelope with Stochastic Sampling. The algorithms work by recalculating each pixel using envelopes for local upper and lower bounds in the image. The envelopes are obtained by sampling neighbor pixels and can be interpreted as local reference maximum and minimum. This approach derives from a computational simplification of previous spatial color algorithms like Retinex or ACE. With the proposed method, various tasks such as local contrast stretching, automatic color correction, high dynamic range image rendering, spatial color gamut mapping, and color to grayscale conversion can be performed with good results. The algorithm exhibits behaviors in line with some aspects of the human visual system, e.g., simultaneous contrast.
BibTeX:
@article{Kolaas2011,
  author = {{\O}yvind Kol{\aa}s and Ivar Farup and Alessandro Rizzi},
  title = {Spatio-Temporal Retinex-Inspired Envelope with Stochastic Sampling (STRESS): A framework for spatial color algorithms},
  month = {Aug},
  journal = {Journal of Imaging Science and Technology},
  year = {2011},
  volume = {55},
  number = {4},
  pages = {1--10},
  url = {http://www.imaging.org/IST/store/epub.cfm?abstrid=44604}
}
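The per-pixel envelope recalculation that the STRESS abstract above describes can be sketched as follows for a single greyscale channel; the uniform spatial sampling and the sample and iteration counts are simplifying assumptions, not the parameters of the published algorithm:

```python
import random

def stress_pixel(image, x, y, num_samples=30, num_iters=10):
    """Recompute one pixel of a 2D greyscale image (list of lists in [0,1])
    by stretching it between stochastic local min/max envelopes,
    in the spirit of the STRESS framework."""
    h, w = len(image), len(image[0])
    p = image[y][x]
    total = 0.0
    for _ in range(num_iters):
        lo = hi = p  # the envelopes always include the pixel itself
        for _ in range(num_samples):
            # Uniform spatial sampling here; the paper samples
            # neighbours with a distance-dependent distribution.
            sx, sy = random.randrange(w), random.randrange(h)
            v = image[sy][sx]
            lo, hi = min(lo, v), max(hi, v)
        # Stretch the pixel between the local reference min and max.
        total += 0.5 if hi == lo else (p - lo) / (hi - lo)
    return total / num_iters
```

Because the envelopes are stochastic, several iterations are averaged; a flat region returns 0.5, which is consistent with the simultaneous-contrast behaviour mentioned in the abstract.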
BibTeX:
@misc{Kolaas2002,
  author = {{\O}yvind Kol\r{a}s},
  title = {AutoColorist - Color correction in digital video},
  year = {2002},
  note = {Bachelor thesis (BA Computers and Multimedia). Gj{\o}vik University College}
}
Abstract: We present a new efficient hue- and edge-preserving spatial color gamut mapping algorithm. The initial computation of the algorithm is to project all out-of-gamut colors to the destination gamut boundary towards the center of the gamut. Based on this spatially invariant hue-preserving clipping of the image, we construct a greyscale map indicating the amount of compression performed. This map can be spatially modified by applying an edge-preserving smoothing filter that never decreases the amount of compression applied to an individual pixel. Finally, the colors of the original image are compressed towards the gamut center according to the filtered map. Examples on real images show that the algorithm gives interesting results.
BibTeX:
@inproceedings{Kolaas2007,
  author = {{\O}yvind Kol\r{a}s and Ivar Farup},
  title = {Efficient Hue-preserving and Edge-preserving Spatial Color Gamut Mapping},
  booktitle = {15th Color Imaging Conference},
  month = {Nov},
  publisher = {IS\&T},
  year = {2007},
  pages = {207-212}
}
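The four steps of the gamut mapping algorithm summarised in the abstract above (clip towards the gamut centre, record a compression map, smooth it without ever decreasing per-pixel compression, then recompress the original) can be sketched as follows; the spherical toy gamut and the mean-based smoothing are simplifying assumptions standing in for a real printer gamut and the paper's edge-preserving filter:

```python
import numpy as np

def spatial_gamut_map(image, center, radius, passes=3):
    """Sketch of the spatial gamut mapping steps, with a spherical toy
    gamut (given center and radius) standing in for a real device gamut.

    image: (h, w, 3) array of colours.
    """
    # Step 1: clip each out-of-gamut pixel towards the gamut centre,
    # recording the compression factor k in (0, 1] per pixel.
    diff = image - center
    dist = np.linalg.norm(diff, axis=2)
    k = np.where(dist > radius, radius / np.maximum(dist, 1e-12), 1.0)
    # Steps 2-3: k is the greyscale compression map; spread compression
    # to neighbours while never decreasing the compression of any pixel
    # (smaller k = more compression, so we take a minimum with the
    # smoothed map).
    for _ in range(passes):
        padded = np.pad(k, 1, mode='edge')
        neigh = np.stack([padded[dy:dy + k.shape[0], dx:dx + k.shape[1]]
                          for dy in range(3) for dx in range(3)])
        k = np.minimum(k, neigh.mean(axis=0))
    # Step 4: compress the original colours towards the centre by the map.
    return center + diff * k[..., None]
```

Scaling each pixel radially towards a fixed centre leaves its hue angle unchanged, which is what makes this family of mappings hue-preserving.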
BibTeX:
@mastersthesis{Kominkova2008b,
  author = {Barbora Kominkova},
  title = {Comparison of two eye tracking devices used on printed images},
  school = {Gj{\o}vik University College and University of Pardubice},
  year = {2008},
  url = {http://www.colorlab.no/content/download/21931/215638/file/Bara_Kominkova_Master_thesis.pdf}
}
BibTeX:
@conference{Kominkova2008a,
  author = {Barbora Kominkova and Jon Yngve Hardeberg and Marius Pedersen and Marie Kaplanova},
  title = {Comparison of eye tracking devices used on printed images},
  booktitle = {Scandinavian Workshop on Applied Eye-tracking},
  address = {Lund, Sweden},
  month = {Apr},
  year = {2008}
}
Abstract: Eye tracking, as a quantitative method for collecting eye movement data, requires accurate knowledge of the eye position, since eye movements can provide indirect evidence about what the subject sees. In this study two eye tracking devices have been compared: a Head-mounted Eye Tracking Device (HED) and a Remote Eye Tracking Device (RED). The precision of both devices, their gaze position accuracy, and the stability of the calibration were determined. For the HED it was investigated how to register data to real-world coordinates, since the coordinates collected by this eye tracker are relative to the position of the subject's head and not relative to the actual stimuli, as in the RED case. Results show that the precision of both eye tracking devices gets worse with time delay. The precision of the RED is better than that of the HED, and the difference between them is around 10-16 pixels (5.584 mm). The distribution of gaze position for the HED and RED was expressed as the percentage of points of regard falling in areas defined by the viewing angle. For both eye tracking devices the gaze position accuracy was 95-99% at a 1.5-2 degree viewing angle. The stability of the calibration was investigated at the end of the experiment; the result is not statistically significant, but the distribution of the gaze position is larger at the end of the experiment than at the beginning.
BibTeX:
@inproceedings{Kominkova2008,
  author = {Barbora Kominkova and Marius Pedersen and Jon Yngve Hardeberg and Marie Kaplanova},
  title = {Comparison of eye tracking devices used on printed images},
  booktitle = {Human Vision and Electronic Imaging VIII (HVEI-08)},
  address = {San Jose, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2008},
  series = {SPIE proceedings},
  volume = {6806},
  keywords = {Eye tracking, precision, gaze position, stability of calibration.}
}
BibTeX:
@inproceedings{Koppen2008,
  author = {Mario Koppen and Katrin Franke},
  title = {A Color Morphology based on Pareto-Dominance Relation and Hypervolume Measure},
  booktitle = {CGIV 2008 - Fourth European Conference on Color in Graphics, Imaging and Vision},
  address = {Terrassa, Spain},
  month = {Jun},
  publisher = {IS\&T},
  year = {2008}
}
BibTeX:
@inproceedings{Koeppen2007,
  author = {Mario Koppen and Katrin Franke},
  title = {A generalized approach of color morphology by means of Pareto-set theory},
  booktitle = {GCIS Proceedings},
  year = {2007},
  pages = {29}
}
BibTeX:
@inproceedings{Kvitle2015,
  author = {Anne Kristin Kvitle and Phil Green and Peter Nussbaum},
  title = {Adaptive color rendering of maps for users with color vision deficiencies},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-42},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109920}
}
BibTeX:
@article{Lapray2014,
  author = {Pierre-Jean Lapray and Xingbo Wang and Jean-Baptiste Thomas and Pierre Gouton},
  title = {Multispectral Filter Arrays: Recent Advances and Practical Implementation},
  journal = {Sensors},
  year = {2014},
  volume = {14},
  number = {11},
  pages = {21626--21659},
  url = {http://www.mdpi.com/1424-8220/14/11/21626},
  doi = {10.3390/s141121626}
}
BibTeX:
@inproceedings{Lau2005,
  author = {Daniel L. Lau and Jon Yngve Hardeberg},
  title = {Geometric alignment of a multiprimary display built by stacking six DLP projectors},
  booktitle = {Proceedings of the 10th Congress of the International Colour Association},
  address = {Granada, Spain},
  month = {May},
  year = {2005},
  pages = {133-136},
  note = {ISBN 84-609-5163-4}
}
BibTeX:
@article{LeMoan2013,
  author = {Le Moan, Steven and Mansouri, Alamin and Hardeberg, Jon Yngve and Voisin, Yvon},
  title = {Saliency for Spectral Image Analysis},
  journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
  year = {2013},
  volume = {6},
  number = {6},
  pages = {2475-2479},
  url = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6515145&queryText%3DSaliency+for+Spectral+Image+Analysis}
}
BibTeX:
@mastersthesis{Lefloch2007,
  author = {Damien Lefloch},
  title = {People counting based on video analysis},
  school = {Gj{\o}vik University College and University de Bourgogne},
  year = {2007},
  url = {http://www.colorlab.no/content/download/21981/216266/file/Damien_Lefloch_Master_thesis.pdf}
}
BibTeX:
@inproceedings{Lefloch2008,
  author = {Damien Lefloch and Faouzi Alaya Cheikh and Jon Yngve Hardeberg and Pierre Gouton and Romain Picot-Clemente},
  title = {Real-time people counting system using a single video camera},
  booktitle = {Real-Time Image Processing},
  month = {Jan},
  publisher = {SPIE},
  year = {2008},
  volume = {6811}
}
BibTeX:
@inproceedings{LeMoan2015,
  author = {Steven Le Moan and Sony T. George and Marius Pedersen and Jana Blahova and Jon Yngve Hardeberg},
  title = {A database for spectral image quality},
  booktitle = {Image Quality and System Performance XII},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9396},
  pages = {9396-25},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109945}
}
Abstract: A large set of data, comprising the spectral reflectances of real surface colours, has been accumulated. The data comprise 16 groups with different materials and include 85,879 measured spectra. From these data, CIELAB colorimetric coordinates were calculated under CIE illuminant D50 and the CIE 1931 standard colorimetric (2°) observer. Several published colour gamuts including those developed by Pointer and ISO reference colour gamut [ISO Graphic Technology Standard 12640-3:2007] were compared using the present data set. It was found that the Pointer gamut is smaller than the new real data in most of the colour regions. The results also showed that the ISO reference colour gamut is larger than the new real accumulated data in most regions. The present finding indicates that there is a need to derive a new colour gamut based on the newly accumulated data for common applications.
BibTeX:
@article{Li2013,
  author = {Changjun Li and M. Ronnier Luo and M. R. Pointer and Phil Green},
  title = {Comparison of real colour gamuts using a new reflectance database},
  journal = {Color Research \& Application},
  year = {2013},
  url = {http://onlinelibrary.wiley.com/doi/10.1002/col.21827/abstract}
}
BibTeX:
@mastersthesis{Liu2013a,
  author = {Xinwei Liu},
  title = {{CID:IQ} - A new image quality database},
  school = {Gj{\o}vik University College},
  year = {2013}
}
BibTeX:
@conference{Liu2013,
  author = {Xinwei Liu and Jon Yngve Hardeberg},
  title = {Fog removal algorithms: survey and perceptual evaluation},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {June},
  year = {2013}
}
BibTeX:
@incollection{Liu2014,
  author = {Liu, Xinwei and Pedersen, Marius and Hardeberg, Jon Yngve},
  title = {CID:IQ - A New Image Quality Database},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8509},
  pages = {193-202},
  keywords = {Image Quality Metric; Noise; Blur; Image Compression; Gamut Mapping; Viewing Distance; Perceptual Experiment},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_22},
  doi = {10.1007/978-3-319-07998-1_22}
}
BibTeX:
@inproceedings{Ly2013,
  author = {Minh Hiep Ly and Jon Yngve Hardeberg},
  title = {Introducing modern memory color},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
Abstract: The purpose of this project was to develop a software application for performing panel tests
comparing images on a computer monitor. The goal of this comparison is to determine whether
pictures, which correspond to specific image data processing algorithms, differ significantly in
quality. If a difference is noticed, the algorithms can be ranked and the best algorithm selected.
Numerous tests can reveal that one specific method is frequently ranked first. In that case, this
algorithm can be set as an industrial standard for image data processing matching specific
requirements. Users, seeing the obvious leader in a particular industry section, can then more
easily decide, and avoid ambiguity, when choosing the most suitable solution meeting their needs.
The software functionality has to include the most important expected features. The
application should support different kinds of tests (paired comparison, category judgement) to
allow a variety of tests to be performed and to achieve more accurate final results. It is
convenient to have a module for administration (users, images, etc.), which simplifies the
researcher’s work during the test period. The software must also have a module for simple data
analysis according to standardized statistical methods. It must be possible to export the
empirical data in a suitable format for compatibility with other widely used software. The
application for performing image quality tests was called QuickEval, which stands for “quick evaluation”.
Viewing statistical results as a mere string of numbers does not give the researcher the
expected clarity. Seeking a clearer evaluation, and to present the results in a format more
suitable for the human eye, we were asked to create a module for drawing charts based on the data
acquired by the comparison module. This improved the overall functionality of the developed
software application and suggested further possibilities for expansion. The module for drawing
graphics is called ChartDrawer, which explains its function.
BibTeX:
@misc{Malakauskas2003,
  author = {Mantas Malakauskas and Gediminas Montvilas},
  title = {Panel testing for image quality},
  month = {May},
  year = {2003},
  note = {Bachelor thesis (BEng Computer Science). Gj{\o}vik University College},
  url = {http://www.colorlab.no/content/download/21984/216275/file/Malakauskas_Bachelor_thesis.pdf}
}
BibTeX:
@article{Mansouri2005,
  author = {Alamin Mansouri and F. S. Marzani and Jon Yngve Hardeberg and Pierre Gouton},
  title = {Optical Calibration of a Multispectral Imaging System based on Interference Filters},
  month = {Feb},
  journal = {Optical Engineering},
  year = {2005},
  volume = {44},
  number = {2},
  pages = {1--12}
}
BibTeX:
@article{Mansouri2008,
  author = {Alamin Mansouri and Tadeusz Sliwa and Jon Yngve Hardeberg and Yvon Voisin},
  title = {Representation and estimation of spectral reflectances using projection on PCA and wavelet bases},
  month = {Dec},
  journal = {Color Research \& Application},
  year = {2008},
  volume = {33},
  number = {6},
  pages = {485-493}
}
BibTeX:
@conference{Mansouri2008a,
  author = {Mansouri, Alamin and Sliwa, Tadeusz and Hardeberg, Jon Yngve and Voisin, Yvon},
  title = {An adaptive-PCA algorithm for reflectance estimation from color images},
  booktitle = {19th International Conference on Pattern Recognition},
  month = {Dec},
  year = {2008},
  pages = {1-4},
  url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4761120&isnumber=4760915}
}
BibTeX:
@inproceedings{Mansouri2007,
  author = {Alamin Mansouri and Tadeusz Sliwa and Jon Yngve Hardeberg and Yvon Voisin},
  title = {New decomposition basis for reflectance recovery from multispectral imaging systems},
  booktitle = {GCIS2007 Proceedings},
  year = {2007},
  pages = {75-82}
}
BibTeX:
@mastersthesis{Marijanovic2012,
  author = {Kristina Marijanovic},
  title = {Spectral print reproduction - modelling and feasibility},
  school = {Gj{\o}vik University College},
  year = {2012}
}
BibTeX:
@inproceedings{Marin2006,
  author = {Ambroise Marin and David Connah and Audrey Roman and Jon Yngve Hardeberg},
  title = {Robustness of texture parameters for color texture analysis},
  booktitle = {Machine Vision Applications in Industrial Inspection XIV},
  address = {San Jose, California},
  month = {Jan},
  year = {2006},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {6070},
  pages = {47-56}
}
BibTeX:
@incollection{MartinezCanada2014,
  author = {Martinez Canada, Pablo and Pedersen, Marius},
  title = {Exposure Fusion Algorithm Based on Perceptual Contrast and Dynamic Adjustment of Well-Exposedness},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8509},
  pages = {183-192},
  keywords = {Exposure Fusion; Tone Mapping; Perceptually Based Image Processing; Contrast; Saturation; Well-Exposedness},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_21},
  doi = {10.1007/978-3-319-07998-1_21}
}
Abstract: This paper presents further results from the research project “Translucent facades” that was initially presented at AIC 2012. In the present paper the results of the colorimetric measurements are presented and compared to the results from visual matching carried out by observers. Interestingly, those two methods lead to the same results for all colour samples examined in the project. Both methods, the visual matching and the colorimetric measurements, may be used in the examination of colour appearance, particularly in research projects dealing with glazing solutions.
BibTeX:
@inproceedings{Matusiak2013,
  author = {Barbara Matusiak and Karin Fridell Anter and Peter Nussbaum and Kine Angelo and Aditya Suneel Sole and Jon Yngve Hardeberg},
  title = {Colour shift: perceived and measured},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
BibTeX:
@mastersthesis{Medina2012,
  author = {Victor Medina},
  title = {A survey of techniques for depth extraction in films},
  school = {Gj{\o}vik University College},
  year = {2012}
}
Abstract: In this study we investigate the feasibility of using an inexpensive webcam to correct projection display non-uniformity. Two main approaches are proposed and evaluated: colorimetric characterization and global characterization. Both approaches are based on displaying images which should ideally have a uniform color distribution, capturing the displayed image with the webcam, and using the captured image to create a correction function, which is then applied to images in order to correct them. Our results show that the feasibility of the proposed methods depends strongly on the quality of the equipment involved. For standard webcams it is generally difficult to obtain the reliable device-independent color measurements needed for colorimetric characterization.
BibTeX:
@inproceedings{Menu2005,
  author = {Gabrielle Menu and Loic Peigne and Jon Yngve Hardeberg and Pierre Gouton},
  title = {Correcting projection display nonuniformity using a webcam},
  booktitle = {Color Imaging X: Processing, Hardcopy, and Applications},
  address = {San Jose, California},
  month = {Jan},
  year = {2005},
  pages = {364-373},
  note = {ISBN / ISSN: 0-8194-5640-3}
}
Abstract: In the everyday use of projection display devices, calibration is rarely considered. This can lead to a projected result that diverges widely from the intended appearance. In 2006 Raja Bala and Karen Braun presented a camera-based calibration method for projection displays. This method aims to achieve a quick and decent calibration easily, using only a consumer digital photo camera. In this master’s thesis the method has been implemented and investigated. The first goal was to investigate the method’s performance, and thereby possibly verify and justify its use. Secondly, extensions were added to the method with the aim of improving its performance. Though some factors in the method have been found troublesome, the method is confirmed to work quite well. However, experiments show that this calibration approach might be more effective for some projection displays than others. Adding extensions to the method enhanced performance results even further, and an extended version of the original model gives the best results in the experiments performed. Conclusions have been drawn on the basis of numeric and visual evaluations.
BibTeX:
@mastersthesis{Mikalsen2007,
  author = {Espen B\r{a}rdsnes Mikalsen},
  title = {Verification and extension of a camera based calibration method for projector displays},
  school = {Gj{\o}vik University College},
  year = {2007},
  url = {http://www.colorlab.no/content/download/21932/215641/file/Espen_Mikalsen_Master_thesis.pdf}
}
BibTeX:
@inproceedings{Mikalsen2008,
  author = {Espen B\r{a}rdsnes Mikalsen and Jon Yngve Hardeberg and Jean-Baptiste Thomas},
  title = {Verification and extension of a camera-based end-user calibration method for projection displays},
  booktitle = {CGIV 2008 - Fourth European Conference on Color in Graphics, Imaging and Vision},
  address = {Terrassa, Spain},
  month = {Jun},
  publisher = {IS\&T},
  year = {2008}
}
BibTeX:
@inproceedings{Moan2015,
  author = {Steven Le Moan and Ludovic Gustafsson Coppel},
  title = {Perceived quality of printed images on fluorescing substrates under various illuminations},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015}
}
BibTeX:
@inproceedings{Moan2012a,
  author = {Steven Le Moan and Ferdinand Deger and Alamin Mansouri and Yvon Voisin and Jon Y. Hardeberg},
  title = {Salient Pixels and Dimensionality Reduction for Display of Multi/Hyperspectral Images},
  booktitle = {Image and Signal Processing},
  month = {June},
  publisher = {Springer},
  year = {2012},
  series = {Lecture Notes in Computer Science (LNCS)},
  volume = {7340},
  pages = {9--16},
  url = {http://www.springerlink.com/content/a60277510r640358/}
}
BibTeX:
@inproceedings{Moan2011,
  author = {Steven Le Moan and Alamin Mansouri and Jon Hardeberg and Yvon Voisin},
  title = {Saliency in Spectral Images},
  booktitle = {Image Analysis},
  publisher = {Springer},
  year = {2011},
  series = {Lecture Notes in Computer Science},
  volume = {6688},
  pages = {114--123},
  url = {http://colorlab.no/content/download/32394/381250/file/Moan2011Poster.pdf}
}
BibTeX:
@conference{Moan2011b,
  author = {Steven Le Moan and Alamin Mansouri and Jon Y. Hardeberg and Yvon Voisin},
  title = {Visualization of spectral images: a comparative study},
  booktitle = {International conference on Pervasive Computing, Signal Processing and Applications},
  address = {Gj{\o}vik, Norway},
  month = {September},
  year = {2011}
}
BibTeX:
@inproceedings{Moan2011d,
  author = {Steven Le Moan and Alamin Mansouri and Jon Y. Hardeberg and Yvon Voisin},
  title = {Saliency-Based Band Selection For Spectral Image Visualization},
  booktitle = {Color and Imaging Conference},
  address = {San Jos{\'e}, California, USA},
  month = {November},
  year = {2011},
  pages = {363--368}
}
Abstract: In this paper, a new color visualization technique for multi and hyperspectral images is proposed. This method is based on a maximization of the perceptual distance between the scene endmembers as well as natural constancy of the resulting images. The stretched CMF principle is used to transform reflectance into values in the CIE L*a*b* colorspace combined with an a priori known segmentation map for separability enhancement between classes. Boundaries are set in the a*b* subspace to balance the natural palette of colors in order to ease interpretation by a human expert. Convincing results on two different images are shown.
BibTeX:
@conference{Moan2010a,
  author = {Steven Le Moan and Alamin Mansouri and Jon Yngve Hardeberg and Yvon Voisin},
  title = {A class-separability-based method for multi/hyperspectral image color visualization},
  booktitle = {International Conference on Image Processing (ICIP)},
  address = {Hong Kong},
  month = {Sep},
  year = {2010}
}
Abstract: In this paper, a new approach for the recognition and classification of convex objects in color images is presented. It is based on a collaboration between color quantization, mathematical morphology and reflectance estimation from RGB data. This yields a robust algorithm regarding the conditions of illumination, the color sensor used for acquisition, as well as the shape/overlapping ambiguities among the objects. One singularity of this work is the use of mathematical morphology in two distinct topologies: first in the entire image, for segmentation purposes, then locally, to enhance the classification of each object. A resolution reduction is used to alleviate the effect of local disturbances such as noise or natural impurities on the objects. The method’s efficiency and usefulness are illustrated on the particular task of coffee beans sorting.
BibTeX:
@inproceedings{Moan2010,
  author = {Steven Le Moan and Alamin Mansouri and Tadeusz Sliwa and Mada{\'i}n P{\'e}rez Patricio and Yvon Voisin and Jon Y. Hardeberg},
  title = {Convex Objects Recognition and Classification Using Spectral and Morphological Descriptors},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {293--299}
}
BibTeX:
@article{Moan2011a,
  author = {S. Le Moan and A. Mansouri and Y. Voisin and J. Hardeberg},
  title = {A Constrained Band Selection Method Based on Information Measures for Spectral Image Color Visualization},
  journal = {IEEE Transactions on Geoscience and Remote Sensing},
  year = {2011}
}
BibTeX:
@conference{Moan2011c,
  author = {Steven Le Moan and Alamin Mansouri and Yvon Voisin and Jon Y. Hardeberg},
  title = {S{\'e}lection de bandes pour la visualisation d'images spectrales : une approche bas{\'e}e sur l'{\'e}tude de saillance},
  booktitle = {XXIIIe Colloque GRETSI - Traitement du Signal et des Images},
  address = {Bordeaux, France},
  month = {September},
  year = {2011}
}
Abstract: We propose a new method for the visualization of spectral images. It involves a perception-based spectrum segmentation using an adaptable thresholding of the stretched CIE standard observer color-matching functions. This allows for an underlying removal of irrelevant channels and, consequently, an alleviation of the computational burden of further processing. Principal Components Analysis is then used in each of the three segments to extract the Red, Green and Blue primaries for final visualization. A comparison framework using two different datasets shows the efficiency of the proposed method.
BibTeX:
@inproceedings{Moan2010b,
  author = {Steven Le Moan and Alamin Mansouri and Yvon Voisin and Jon Y. Hardeberg},
  title = {An Efficient Method for the Visualization of Spectral Images Based on a Perception-Oriented Spectrum Segmentation},
  booktitle = {Advances in Visual Computing - 6th International Symposium},
  address = {Las Vegas, NV},
  month = {Nov},
  publisher = {Springer},
  year = {2010},
  series = {Lecture Notes in Computer Science},
  pages = {361-370},
  doi = {10.1007/978-3-642-17274-8_48}
}
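The abstract above (segment-wise PCA for spectral image visualization) can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the spectrum is split into three segments and the first principal component of each segment becomes one of the R, G, B channels. The segment boundaries here are arbitrary placeholders; the paper derives them from stretched CIE color-matching functions.

```python
import numpy as np

def pca_first_component(bands):
    """First principal component of an (npixels, nbands) matrix, scaled to [0, 1]."""
    centered = bands - bands.mean(axis=0)
    # SVD of the centered data; the first right-singular vector spans PC1.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]
    return (scores - scores.min()) / (np.ptp(scores) + 1e-12)

def visualize(cube, cuts=(1/3, 2/3)):
    """cube: (h, w, nbands) reflectance image -> (h, w, 3) false-color RGB."""
    h, w, n = cube.shape
    flat = cube.reshape(-1, n)
    i1, i2 = int(n * cuts[0]), int(n * cuts[1])
    # Assuming bands are ordered short-to-long wavelength:
    # short segment -> B, middle -> G, long -> R.
    b = pca_first_component(flat[:, :i1])
    g = pca_first_component(flat[:, i1:i2])
    r = pca_first_component(flat[:, i2:])
    return np.dstack([r.reshape(h, w), g.reshape(h, w), b.reshape(h, w)])

rgb = visualize(np.random.rand(8, 8, 31))
```

Each channel is min-max normalized independently, which matches the visualization goal (class separability) rather than colorimetric accuracy.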
Abstract: For those of us walking through life with normal sight and no need for glasses, it is difficult to imagine how weak-sighted persons see the world. To us, watching a flower simply gives an image of a flower, whether it is red, yellow, blue, tall or small. However, this image is not at all the same for everyone; due to a visual impairment the perceived image can be very different, and may even change while watching the same flower. In order to spread knowledge and understanding about vision deficiencies in our diverse society, we have developed an eye disease simulator. The software is developed using the Java programming language to be accessible on different computing platforms. The user can load an image either from a file or from a webcam. One of several eye diseases can be selected, and by clicking a certain location in the image with the mouse, a realistic simulation is displayed of how this image would appear when the gaze is directed at this location. Currently the simulated diseases include age-related macular degeneration (AMD), cataract, retinitis pigmentosa, diabetic retinopathy, and glaucoma. The Eye Disease Simulator will be made available online through the World Wide Web, and will initially be targeted specifically towards children. By increasing children’s knowledge about this kind of disability we hope to raise their awareness, dignity and respect. However, through the innovative process of designing, developing, and testing this software, we have discovered that it could also be an important tool for professionals who often face challenges related to visual impairments (doctors, architects, designers, teachers, therapists, etc.). Planning for universal design requires knowledge of the visual impairment challenges, and the eye disease simulator is a practical tool that can help designers understand the fundamental visual tasks.
BibTeX:
@conference{Moan2012,
  author = {Steven Le Moan and Heidi Sarheim and Jonny Nersveen},
  title = {Eye Disease Simulator – how do we see the world when vision is failing?},
  booktitle = {Universal Design},
  address = {Oslo, Norway},
  month = {June},
  year = {2012}
}
BibTeX:
@mastersthesis{Moreno2011,
  author = {Alejandro Moreno},
  title = {Classification of Teachers' Actions in Lecture Videos},
  school = {Gj{\o}vik University College},
  year = {2011}
}
Abstract: In this paper we present an automatic color correction framework based on memory colors. Memory colors for three different objects (grass, snow and sky) are obtained using psychophysical experiments under different illumination levels and later modeled statistically. While a supervised image segmentation method detects memory color objects, a luminance level predictor classifies images as dark, dim or bright. This information, along with the best-fitting memory color model, is used to perform color correction using a novel weighted von Kries formula. Finally, a visual experiment is conducted to evaluate the color-corrected images. Experimental results suggest that the proposed weighted von Kries model is an appropriate color correction model for natural images.
BibTeX:
@inproceedings{AlejandroMoreno2009,
  author = {Alejandro Moreno and Basura Fernando and Bismillah Kani and Sajib Saha and Sezer Karaoglu},
  title = {Color Correction: A Novel Weighted Von Kries Model Based on Memory Colors},
  booktitle = {CCIW2011},
  address = {Milan, Italy},
  month = {April},
  publisher = {Springer},
  year = {2011},
  series = {Lecture Notes in Computer Science},
  volume = {6626},
  pages = {165--175},
  url = {http://www.springerlink.com/content/y012h51g23368524/}
}
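The weighted von Kries correction described in the abstract above can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's actual formula: a diagonal von Kries gain maps the estimated illuminant to the target white, and a scalar weight blends between no correction and full correction; in the paper the weighting comes from the memory-color statistics.

```python
import numpy as np

def weighted_von_kries(rgb, illuminant, target_white, w=0.5):
    """rgb: (..., 3) image in [0, 1]; illuminant/target_white: per-channel estimates.

    w = 0 leaves the image unchanged, w = 1 applies the full von Kries gains.
    """
    gains = np.asarray(target_white, float) / np.asarray(illuminant, float)
    # Blend identity gains (no correction) with the full diagonal gains.
    blended = (1.0 - w) + w * gains
    return np.clip(rgb * blended, 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)
corrected = weighted_von_kries(img, illuminant=[0.6, 0.5, 0.4],
                               target_white=[0.5, 0.5, 0.5], w=1.0)
```

With w=1 this reduces to the classical von Kries diagonal transform; intermediate weights give the partial adaptation the weighted model allows.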
Abstract: This study aims to investigate factors affecting the appearance of print on both opaque and transparent substrates. In particular it looks at factors from five categories: the digital input, the printing system, the print, the illumination under which the print is viewed and the viewing environment in which it is viewed. The key method underlying the work described here relies on identifying a range of factors in these categories and having alternative states for each factor, e.g., the substrate factor can be plain paper, glossy paper or newsprint. A reference state is then defined for each factor and alternative states are compared with the reference one factor at a time. The comparison is in terms of color differences between patches of a test chart obtained in the reference and an alternative state. The results for factors are then viewed both individually and by grouping all factors of a given category together. Finally the results indicate the magnitude of the change that can be expected due to a given factor or category and this makes it possible to order factors in terms of the magnitude of visual difference they can cause when altered. Having such an ordered list is then of use both in improving printing systems and in dealing with customer service queries.
BibTeX:
@article{Morovic2003,
  author = {Jan Morovic and Peter Nussbaum},
  title = {Factors Affecting the Appearance of Print on Opaque and Transparent Substrates},
  month = {Nov/Dec},
  journal = {The Journal of Imaging Science and Technology},
  year = {2003},
  volume = {47},
  number = {6},
  pages = {554-564},
  note = {ISBN / ISSN: 1062-3701}
}
Abstract: As monitors are the only way to see a picture before printing, all the monitors along the graphic chain should give the same representation of this picture (in terms of colour, contrast, etc.). This is possible only by profiling each monitor. Nowadays the use of instruments to make the profile of a monitor is common in the graphic industry. Different kinds of instruments are available: colorimeters, spectrophotometers and spectroradiometers. Although they use different technologies, we expect them to give the same results (usually L*a*b* coordinates) for the same colour. In this thesis we compared different instruments (colorimeters and spectrophotometers) in terms of repeatability, reproducibility, precision and accuracy.
BibTeX:
@mastersthesis{Moutou2009,
  author = {Clementine Moutou},
  title = {Consequence of using a number of different colour measurement instruments in particular for emission purpose},
  school = {Gj{\o}vik University College and Grenoble Institute of Technology},
  year = {2009},
  url = {http://colorlab.no/content/download/25453/270987/file/Clementine_Moutou_Master_thesis.pdf}
}
BibTeX:
@mastersthesis{MOZEJKO2013,
  author = {Dawid Mozejko},
  title = {Image texture, uniformity, homogeneity and radiation dose properties in CT},
  school = {Gj{\o}vik University College},
  year = {2013}
}
BibTeX:
@techreport{Ngo2011,
  author = {Khai Van Ngo and Christopher Andre Dokkeberg and Jehans Jr. Storvik},
  title = {QuickEval},
  year = {2011},
  note = {Bachelor thesis report}
}
BibTeX:
@inproceedings{Ngo2015,
  author = {Khai Van Ngo and Jehans Jr. Storvik and Christopher A. Dokkeberg and Ivar Farup and Marius Pedersen},
  title = {{QuickEval}: A web application for psychometric scaling experiments},
  booktitle = {Image Quality and System Performance XII},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9396},
  pages = {9396-24},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109944}
}
BibTeX:
@phdthesis{Nussbaum2010,
  author = {Peter Nussbaum},
  title = {Measurement and Print Quality Assessment in a Colour Managed Printing Workflow},
  month = {December},
  school = {University of Oslo and Gj{\o}vik University College, Norway},
  year = {2010}
}
BibTeX:
@inproceedings{Nussbaum2008,
  author = {Peter Nussbaum},
  title = {Print Quality Evaluation and Applied Colour Management in Coldset Offset Newspaper Print},
  booktitle = {{TAGA} 60th Annual Technical Conference},
  address = {San Francisco, CA, USA},
  month = {Mar},
  year = {2008}
}
Abstract: This article aims to investigate print quality in newspaper print by considering the appropriate calibration standard and applying colour management. In particular, it examines the colorimetric properties of eight Norwegian newspaper printing presses, to evaluate the relevant colour separation approach, either by applying custom separation profiles or by using an industry standard profile. The key method underlying the work described here relies on obtaining colour measurements to determine the repeatability of each participant in terms of colour differences. Furthermore, the variation between the eight newspaper printing presses and the variation with respect to the colorimetric values of the ISO 12647-3 standard are important parts of the quantitative evaluation. Based on the colour measurements, two custom ICC profiles were generated, and the industry standard profile “ISOnewspaper26v4.icc” was also used. The first custom profile was generated using an averaged colour measurement data set from a test print run, and the second using a data set averaged between the measured data and the characterization data set “IFRA26.txt” provided by IFRA. These three profiles were applied to four test images, which were then printed by the eight newspaper printing presses. A psychophysical experiment was carried out to determine the “pleasantness” of the reproductions produced using the three profiles. The results of the study show which profile, applied across the eight newspaper printing presses, yields the best print quality. The results also demonstrate that the print variations in colour between the eight printing presses are larger than the difference between the custom and the standard profiles. Hence, the print variations, and not the profile selection, may have determined the visual print quality.
Therefore the study reveals the importance of adopting international standards and methods, instead of using insufficiently defined house standards, to achieve consistent results among different newspaper printing presses.
BibTeX:
@article{Nussbaum2011a,
  author = {Peter Nussbaum and Jon Y. Hardeberg},
  title = {Print Quality Evaluation and Applied Colour Management in Coldset Offset Newspaper Print},
  month = {April},
  journal = {Color Research and Application},
  year = {2012},
  volume = {37},
  number = {2},
  pages = {82--91},
  url = {http://onlinelibrary.wiley.com/doi/10.1002/col.20674/abstract},
  doi = {10.1002/col.20674}
}
BibTeX:
@inproceedings{Nussbaum2006,
  author = {Peter Nussbaum and Jon Yngve Hardeberg},
  title = {Print quality evaluation and applied colour management in heat-set web offset},
  booktitle = {IARIGAI Conference},
  address = {Leipzig},
  month = {Sep},
  year = {2006}
}
Abstract: In the context of print quality and process control, colorimetric parameters and tolerance values are clearly defined, and calibration procedures are well defined for color measurement instruments in printing workflows. Still, using more than one color measurement instrument to measure the same color wedge can produce clearly different results, due to random and systematic errors of the instruments. In situations where one instrument gives values just inside the given tolerances while another produces values exceeding the predefined tolerance parameters, the question arises whether the print or proof is accepted with regard to the standard parameters. The aim of this paper was to determine an appropriate model to characterize color measurement instruments for printing applications, in order to improve the colorimetric performance and hence the inter-instrument agreement. The method proposed is derived from color image acquisition device characterization methods, applied here by performing polynomial regression with a least squares technique. Six commercial color measurement instruments were used to measure color patches of a control color wedge on three different types of paper substrates. The characterization functions were derived using least squares polynomial regression, based on a training set of colorimetric reference values for 14 BCRA tiles and the corresponding colorimetric measurements obtained by the instruments. The derived functions were then used to correct the colorimetric values of test sets of 46 measurements of the color control wedge patches. The corrected measurements obtained from the regression model were compared with the corrected measurements from the other instruments to find the most appropriate polynomial, i.e., the one resulting in the smallest color difference.
The obtained results demonstrate that the proposed regression method works remarkably well with a range of different color measurement instruments used on three types of substrates. Finally, by extending the training set from 14 samples to 38 samples, the obtained results clearly indicate that the model is robust.
BibTeX:
@inproceedings{Nussbaum2011b,
  author = {Peter Nussbaum and Jon Yngve Hardeberg and Fritz Albregtsen},
  title = {Regression based characterization of color measurement instruments in printing applications},
  booktitle = {Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7866},
  pages = {78661R},
  url = {http://www.colorlab.no/content/download/30763/366707/file/Nussbaum2011Poster.pdf}
}
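The regression-based characterization in the abstract above (least squares polynomial regression from instrument readings to reference colorimetric values) can be sketched as follows. The second-order term set and the synthetic data are illustrative assumptions; the paper evaluates several polynomial orders on real BCRA-tile measurements.

```python
import numpy as np

def polynomial_terms(lab):
    """Second-order polynomial expansion of (n, 3) L*a*b* measurements."""
    L, a, b = lab[:, 0], lab[:, 1], lab[:, 2]
    ones = np.ones_like(L)
    return np.column_stack([ones, L, a, b, L*a, L*b, a*b, L**2, a**2, b**2])

def fit_characterization(measured, reference):
    """Least-squares coefficients mapping measured values to reference values."""
    X = polynomial_terms(measured)
    coeffs, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return coeffs

def apply_characterization(coeffs, measured):
    return polynomial_terms(measured) @ coeffs

# Synthetic example: recover a known systematic (linear) error between
# a simulated instrument and the reference, using 14 training samples
# as in the paper's training set size.
rng = np.random.default_rng(0)
ref = rng.uniform([0, -50, -50], [100, 50, 50], size=(14, 3))
meas = ref * 1.02 + np.array([1.0, -0.5, 0.3])   # simulated gain + offset error
coeffs = fit_characterization(meas, ref)
corrected = apply_characterization(coeffs, meas)
```

Because the simulated error is linear and the basis contains the linear terms, the fit recovers the reference values almost exactly; real inter-instrument errors are only partly polynomial, which is why the paper compares residual color differences across polynomials.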
BibTeX:
@article{Nussbaum2006a,
  author = {Peter Nussbaum and Jon Yngve Hardeberg and Svein Erik Skarsb{\o}},
  title = {Print quality evaluation for governmental purchase decisions},
  journal = {Advances in Printing Science and Technology},
  year = {2006},
  volume = {31},
  pages = {189-200},
  note = {ISBN 953-96276-9-9}
}
BibTeX:
@article{Nussbaum2011,
  author = {Peter Nussbaum and Aditya Sole and Jon Y. Hardeberg},
  title = {Analysis of color measurement uncertainty in a color managed printing workflow},
  journal = {Journal of Print and Media Technology Research},
  year = {2011}
}
BibTeX:
@mastersthesis{Ochoa2012a,
  author = {Victor Manuel Torres Ochoa},
  title = {Adult video content detection using Machine Learning Techniques},
  school = {Gj{\o}vik University College},
  year = {2012}
}
BibTeX:
@inproceedings{Ochoa2012,
  author = {Victor M. Torres Ochoa and Sule Yildirim Yayilgan and Faouzi Alaya Cheikh},
  title = {Adult video content detection using Machine Learning Techniques},
  booktitle = {Eighth International Conference on Signal Image Technology and Internet Based Systems},
  address = {Sorrento, Naples, Italy},
  month = {Nov},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {967--974}
}
BibTeX:
@misc{Omer2013,
  author = {Niyan Omer},
  title = {FOGSCREEN - A new generation of display},
  school = {Gj{\o}vik University College},
  year = {2013},
  note = {Bachelor thesis}
}
BibTeX:
@inproceedings{Oncu2012,
  author = {Alexandra Ioana Oncu and Ferdinand Deger and Jon Yngve Hardeberg},
  title = {Evaluation of Digital Inpainting Quality in the Context of Artwork Restoration},
  booktitle = {12th European Conference on Computer Vision (ECCV)},
  month = {October},
  year = {2012}
}
BibTeX:
@inproceedings{Ouglov2006,
  author = {Andrei Ouglov and Ali Alsam and Rune Hjelsvold},
  title = {Gamut Intersection for Image Retrieval},
  booktitle = {CGIV 2006 -- Third European Conference on Color in Graphics, Imaging and Vision},
  address = {Leeds},
  month = {Jun},
  year = {2006}
}
BibTeX:
@mastersthesis{Panak2007a,
  author = {Panak, Ondrej},
  title = {Color matching under soft-proofing conditions},
  school = {Gj{\o}vik University College and University of Pardubice},
  year = {2007}
}
Abstract: A color memory experiment with 5 colors (red, green, blue, yellow, and Caucasian skin color) was carried out. The color patches, shown on an LCD monitor, were memorized under a given viewing condition. The memory color was then mixed first under the same viewing condition, and subsequently under two other altered viewing conditions. The conditions differed in the background and surround parameters. The color appearance model CIECAM02 was then used to predict color attributes under the altered viewing conditions. The lowest color memory shift in the hue attribute was found for the red color. CIECAM02 seemed to have some limitations in predicting the colorfulness and chroma attributes for colors viewed on a black background. The results show that the prediction of color attributes in the color memory experiment was not successful.
BibTeX:
@inproceedings{Panak2007,
  author = {Ondrej Panak and Peter Nussbaum and Jon Yngve Hardeberg and Marie Kaplanova},
  title = {Colour Memory Match Under Disparate Viewing Conditions},
  booktitle = {IS\&T and SID's 15th Color Imaging Conference},
  year = {2007},
  pages = {325-330}
}
BibTeX:
@inproceedings{Pant2009,
  author = {Dibakar Raj Pant},
  title = {Least-Square Technique for Color Reproduction of Semi-Transparent Material},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {70-76},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
BibTeX:
@phdthesis{Pant2012a,
  author = {Dibakar Raj Pant},
  title = {Line Element and Variational Methods for Color Difference Metrics},
  month = {Feb},
  school = {University Jean Monnet, France in collaboration with Gj{\o}vik University College, Norway},
  year = {2012}
}
Abstract: Riemannian metric tensors of color difference formulas are derived from the line elements in a color space. The shortest curve between two points in a color space can be calculated from the metric tensors. This shortest curve is called a geodesic. In this article, the authors present computed geodesic curves and corresponding contours of the CIELAB (ΔE*ab), the CIELUV (ΔE*uv), the OSA-UCS (ΔEE) and an infinitesimal approximation of the CIEDE2000 (ΔE00) color difference metrics in the CIELAB color space. At a fixed value of lightness L*, geodesic curves originating from the achromatic point and their corresponding contours of the above four formulas in the CIELAB color space can be described as hue geodesics and chroma contours. The Munsell chromas and hue circles at the Munsell values 3, 5, and 7 are compared with computed hue geodesics and chroma contours of these formulas at three different fixed lightness values. It is found that the Munsell chromas and hue circles do not match the computed hue geodesics and chroma contours of the above-mentioned formulas at different Munsell values. The results also show that the distribution of color stimuli predicted by the infinitesimal approximation of CIEDE2000 (ΔE00) and the OSA-UCS (ΔEE) in the CIELAB color space is in general not better than that of the conventional CIELAB (ΔE*ab) and CIELUV (ΔE*uv) formulas.
BibTeX:
@article{Pant2012,
  author = {Dibakar Raj Pant and Ivar Farup},
  title = {Geodesic calculation of color difference formulas and comparison with the Munsell color order system},
  month = {February},
  journal = {Color Research \& Application},
  year = {2012},
  url = {http://onlinelibrary.wiley.com/doi/10.1002/col.20751/full},
  doi = {10.1002/col.20751}
}
Abstract: Study of various color difference formulas by the Riemannian approach is useful. By this approach, it is possible to evaluate the performance of various color difference formulas defined in different color spaces for measuring visual color difference. In this article, the authors present mathematical formulations of CIELAB (ΔE*ab), CIELUV (ΔE*uv), OSA-UCS (ΔEE) and an infinitesimal approximation of CIEDE2000 (ΔE00) as Riemannian metric tensors in a color space. It is shown how such metrics are transformed into other color spaces by means of Jacobian matrices. The coefficients of such metrics give equi-distance ellipsoids in three dimensions and ellipses in two dimensions. A method is also proposed for comparing the similarity between a pair of ellipses. The technique works by calculating the ratio of the area of intersection and the area of union of a pair of ellipses. The performance of these four color difference formulas is evaluated by comparing computed ellipses with experimentally observed ellipses in the xy chromaticity diagram. The result shows that there is no significant difference between the Riemannized ΔE00 and the ΔEE at small color differences, but they are both notably better than ΔE*ab and ΔE*uv.
BibTeX:
@article{Pant2011,
  author = {Dibakar Raj Pant and Ivar Farup},
  title = {Riemannian formulation and comparison of color difference formulas},
  month = {September},
  journal = {Color Research \& Application},
  year = {2011},
  url = {http://onlinelibrary.wiley.com/doi/10.1002/col.20710/full},
  doi = {10.1002/col.20710}
}
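The ellipse similarity measure described in the abstract above (the ratio of the area of intersection to the area of union of a pair of ellipses) can be sketched as follows. This is an illustrative approximation only: the paper's exact computation is not reproduced here, the Monte Carlo sampling is an assumed implementation strategy, and all function names are hypothetical.

```python
import math
import random

def in_ellipse(x, y, ellipse):
    """Test whether point (x, y) lies inside an ellipse given as
    (cx, cy, a, b, theta): centre, semi-axes, rotation in radians."""
    cx, cy, a, b, theta = ellipse
    dx, dy = x - cx, y - cy
    # Rotate the point into the ellipse's own axes.
    u = dx * math.cos(theta) + dy * math.sin(theta)
    v = -dx * math.sin(theta) + dy * math.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def ellipse_similarity(e1, e2, n=50_000, seed=0):
    """Monte Carlo estimate of area(e1 ∩ e2) / area(e1 ∪ e2).
    Returns a value in [0, 1]; 1 means the ellipses coincide."""
    rng = random.Random(seed)
    # Loose bounding box guaranteed to contain both ellipses.
    r = max(max(e[2], e[3]) for e in (e1, e2))
    x0, x1 = min(e1[0], e2[0]) - r, max(e1[0], e2[0]) + r
    y0, y1 = min(e1[1], e2[1]) - r, max(e1[1], e2[1]) + r
    inter = union = 0
    for _ in range(n):
        x, y = rng.uniform(x0, x1), rng.uniform(y0, y1)
        p, q = in_ellipse(x, y, e1), in_ellipse(x, y, e2)
        inter += p and q
        union += p or q
    return inter / union if union else 0.0
```

Identical ellipses give a ratio of 1, disjoint ellipses 0; intermediate values capture differences in size, shape and orientation, as the measure requires.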
Abstract: The CIE recommended uniform chromaticity scale (UCS) diagram based on the CIELUV is used to evaluate the non-Euclidean approximate form of CIEDE2000 and the Euclidean ΔEE colour difference formulas for measuring visual data. Experimentally observed visual colour difference data in terms of supra-threshold ellipses are plotted in the CIELUV u'v' diagram. Similarly, equi-distance ellipses of the two formulas are computed and plotted in the same diagram. The performance of these formulas is evaluated by calculating the matching ratio between observed and computed ellipse pairs. Various statistical tests are performed on these ratio values. It is found that there is no significant difference between the complex non-Euclidean approximate form of ΔE00 and the simple Euclidean ΔEE.
BibTeX:
@inproceedings{Pant2011a,
  author = {Pant, Dibakar R. and Farup, Ivar},
  title = {CIE uniform chromaticity scale diagram for measuring performance of OSA-UCS DEE and CIEDE00 formulas},
  booktitle = {Visual Information Processing (EUVIP), 2011 3rd European Workshop on},
  month = {July},
  year = {2011},
  pages = {18--23},
  keywords = {CIE uniform chromaticity scale diagram;CIEDE00 formula;CIELUV diagram;Euclidean colour difference formulas;OSA-UCS ΔEE formula;UCS diagram;equidistance ellipse;matching ratio;non-Euclidean CIEDE2000 approximation;performance measurement;statistical tests;supra threshold ellipses;visual colour difference data;approximation theory;colorimetry;statistical testing;},
  doi = {10.1109/EuVIP.2011.6045520}
}
Abstract: For precision color matching, visual sensitivity to small color differences is an essential factor. Small color differences can be measured by just noticeable difference (JND) ellipses. The points on such an ellipse represent colours that are just noticeably different from the colour of the centre point. Mathematically, such an ellipse can be described by a positive definite quadratic differential form, which is also known as the Riemannian metric. In this paper, we propose a method which makes use of the Riemannian metric and Jacobian transformations to transform JND ellipses between different colour spaces. As an example, we compute the JND ellipses of the CIELAB and CIELUV color difference formulae in the xy chromaticity diagram. We also propose a measure for comparing the similarity of a pair of ellipses, and use that measure to compare the CIELAB and CIELUV ellipses to two previously established experimental sets of ellipses. The proposed measure takes into account size, shape and orientation. The technique works by calculating the ratio of the area of the intersection and the area of the union of a pair of ellipses. The method developed can in principle be applied to compare the performance of any color difference formula against experimentally obtained sets of colour discrimination ellipses.
BibTeX:
@inproceedings{Pant2010,
  author = {Dibakar Raj Pant and Ivar Farup},
  title = {Evaluating Color Difference Formulae by Riemannian Metric},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {497--503},
  url = {http://colorlab.no/content/download/30169/361106/file/Pant2010Poster.pdf}
}
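The Jacobian transformation of a metric mentioned in the abstract above follows the standard tensor rule g' = JᵀgJ. A minimal sketch for the 2×2 case, with function names that are my own rather than the paper's:

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    """Transpose of a 2x2 matrix."""
    return [[A[j][i] for j in range(2)] for i in range(2)]

def transform_metric(g, J):
    """Transform a 2x2 Riemannian metric under a change of coordinates
    with Jacobian J (old coordinates w.r.t. new): g' = J^T g J, so the
    line element ds^2 = dx^T g dx is preserved."""
    return mat_mul(transpose(J), mat_mul(g, J))
```

Since a JND ellipse is the unit level set ds² = 1 of the metric, transforming the metric this way transports the ellipse between colour spaces.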
Abstract: The CIELAB based CIEDE2000 colour difference formula to measure small to medium colour differences is the latest standard formula of today which incorporates different corrections for the non uniformity of CIELAB space. It also takes account of parametric factors. In this paper, we present a mathematical formulation of the CIEDE2000 by the line element to derive a Riemannian metric tensor in a color space. The coefficients of this metric give Just Noticeable Difference (JND) ellipsoids in three dimensions and ellipses in two dimensions. We also show how this metric can be transformed between various colour spaces by means of the Jacobian matrix. Finally, the CIEDE2000 JND ellipses are plotted into the xy chromaticity diagram and compared to the observed BFD-P colour matching ellipses by a comparing method described in Pant and Farup (CGIV2010).
BibTeX:
@inproceedings{Pant2010a,
  author = {Dibakar Raj Pant and Ivar Farup},
  title = {Riemannian Formulation of the CIEDE2000 Color Difference Formula},
  booktitle = {Color Imaging Conference},
  address = {San Antonio, TX, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2010}
}
BibTeX:
@inproceedings{Pant2013,
  author = {Dibakar Raj Pant and Ivar Farup and Manuel Melgosa},
  title = {Analysis of Three Euclidean Color-Difference Formulas for Predicting the Average RIT-dupont Color-Difference Ellipsoids},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
BibTeX:
@mastersthesis{Paul2007,
  author = {Steffen Paul},
  title = {Color management in digital intermediate movie production},
  school = {Gj{\o}vik University College \& Mittweida University of Applied Sciences},
  year = {2007}
}
BibTeX:
@inproceedings{Pedersen2009a,
  author = {Marius Pedersen},
  title = {Full-Reference Image Quality Metrics and Still Not Good Enough?},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {4},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
Abstract: Many image quality and image difference metrics have been proposed over the last decades. An important factor when evaluating image quality or image difference is the viewing distance. In this paper we propose a new image difference metric based on the simulation of detail visibility and total variation. The simulation of detail visibility using shearlets takes into account the viewing conditions and the viewing distance, and the image difference is calculated by total variation. An evaluation has been carried out to verify the simulation of image detail visibility, and it shows promising results. Evaluation of the new image difference metric is also promising.
BibTeX:
@inproceedings{Pedersen2014,
  author = {Marius Pedersen},
  title = {An Image Difference Metric based on Simulation of Image Detail Visibility and Total Variation},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {37--42},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00005}
}
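The total-variation component mentioned in the abstract above can be illustrated with the standard discrete (anisotropic) TV of a greyscale image. This is a generic textbook formulation, not the paper's exact metric:

```python
def total_variation(img):
    """Anisotropic discrete total variation of a greyscale image given
    as a list of rows: the sum of absolute differences between each
    pixel and its right and lower neighbours."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                tv += abs(img[y + 1][x] - img[y][x])
    return tv
```

A simple difference measure in this spirit would apply such a TV functional to the difference image between original and reproduction.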
BibTeX:
@phdthesis{Pedersen2011d,
  author = {Marius Pedersen},
  title = {Image quality metrics for the evaluation of printing workflows},
  month = {Oct},
  school = {University of Oslo and Gj{\o}vik University College},
  year = {2011}
}
Abstract: Many image difference metrics have been developed in the last 4 decades. All of these metrics are constructed to predict perceived image difference, but none have been successful. When we rate image difference we look at different areas in the image, and based on the difference in these areas we make a decision about the perceived difference. Information about what draws attention and how we examine images can be used to improve image difference metrics.
This research project investigates the importance of region-of-interest in image difference metrics. Regions-of-interest were extracted using an eye tracker, but also by manual marking by the observers. Three different tasks were performed by the observers while their gaze position was recorded. Furthermore, a manual marking of regions-of-interest together with a questionnaire to map background knowledge was carried out. The findings on how we perceive and examine images have been applied to different image difference metrics, such as deltaEab, S-CIELAB, iCAM, SSIM and the hue angle algorithm. The issues regarding how observers look at images given different tasks are also discussed and analyzed.
The results indicate that region-of-interest improves image difference metrics, especially when the metrics already have a low performance in terms of linear correlation between perceived and calculated difference. There is no clear evidence that one type of region-of-interest outperforms the others. The improvement in performance is therefore both scene and metric dependent.
Results also show that observers attend to different areas according to the task given to them, such as free viewing, rating image difference, and marking important regions. The common denominator across all tasks is faces, which are clearly important to the observers in every task. The areas of attention also change depending on whether the observer is an expert or a non-expert.
BibTeX:
@mastersthesis{Pedersen2007,
  author = {Marius Pedersen},
  title = {Importance of region-of-interest on image difference metrics},
  school = {Gj{\o}vik University College},
  year = {2007},
  url = {http://www.colorlab.no/content/download/21937/215656/file/Marius_Pedersen_Master_thesis.pdf}
}
BibTeX:
@inproceedings{Pedersen2009b,
  author = {Marius Pedersen and Fritz Albregtsen and Jon Y. Hardeberg},
  title = {Detection of worms in error diffusion halftoning},
  booktitle = {Image Quality and System Performance VI},
  address = {San Jose, CA, USA},
  month = {Jan},
  year = {2009},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7242}
}
BibTeX:
@inproceedings{Pedersen2010,
  author = {Marius Pedersen and Seyed Ali Amirshahi},
  title = {Framework for the Evaluation of Color Prints Using Image Quality Metrics},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {75--82}
}
BibTeX:
@conference{Pedersen2011c,
  author = {Marius Pedersen and Arne Magnus Bakke},
  title = {Seam carving for multi-projector displays},
  booktitle = {International conference on Pervasive Computing, Signal Processing and Applications},
  address = {Gj{\o}vik, Norway},
  month = {September},
  year = {2011}
}
BibTeX:
@conference{Marius2009,
  author = {Marius Pedersen and Nicolas Bonnier and Fritz Albregtsen and Jon Yngve Hardeberg},
  title = {Towards a New Image Quality Model for Color Prints},
  booktitle = {ICC Digital Print Day},
  month = {Mar},
  year = {2009},
  url = {http://www.color.org/DigitalPrint/ICCDigitalPrint_presentations.pdf}
}
Abstract: Image quality metrics have become more and more popular in the image processing community. However, so far, no one has been able to define an image quality metric well correlated with the percept for overall image quality. One of the causes is that image quality is multi-dimensional and complex. One approach to bridge the gap between perceived and calculated image quality is to reduce the complexity of image quality, by breaking the overall quality into a set of quality attributes. In our research we have presented a set of quality attributes built on existing attributes from the literature. The six proposed quality attributes are: sharpness, color, lightness, artifacts, contrast, and physical. This set keeps the dimensionality to a minimum. An experiment validated the quality attributes as suitable for image quality evaluation.

The process of applying image quality metrics to printed images is not straightforward, because image quality metrics require a digital input. A framework has been developed for this process, which includes scanning the print to get a digital copy, image registration, and the application of image quality metrics. With quality attributes for the evaluation of image quality and a framework for applying image quality metrics, a selection of suitable image quality metrics for the different quality attributes has been carried out. Each of the quality attributes has been investigated, and an experimental analysis carried out to find the most suitable image quality metrics for each. For many of the attributes, metrics based on structural similarity are the most suitable, while for other attributes further evaluation is required.

BibTeX:
@inproceedings{Pedersen2011,
  author = {Marius Pedersen and Nicolas Bonnier and Jon Y. Hardeberg and Fritz Albregtsen},
  title = {Image quality metrics for the evaluation of print quality},
  booktitle = {Image Quality and System Performance},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  url = {http://colorlab.no/content/download/30728/366341/file/ISQP2011.pdf}
}
BibTeX:
@article{Pedersen2010a,
  author = {Marius Pedersen and Nicolas Bonnier and Jon Yngve Hardeberg and Fritz Albregtsen},
  title = {Attributes of Image Quality for Color Prints},
  month = {Jan},
  journal = {Journal of Electronic Imaging},
  year = {2010},
  volume = {19},
  number = {1},
  pages = {011016-1 -- 011016-13},
  url = {http://spiedl.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JEIME5000019000001011016000001&idtype=cvips&gifs=Yes&ver=dl&type=ALERT}
}
BibTeX:
@inproceedings{Pedersen2010b,
  author = {Marius Pedersen and Nicolas Bonnier and Jon Yngve Hardeberg and Fritz Albregtsen},
  title = {Validation of Quality Attributes for Evaluation of Color Prints},
  booktitle = {Color and Imaging Conference},
  address = {San Antonio, TX},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2010},
  pages = {74--79},
  url = {http://www.colorlab.no/content/download/29992/360170/file/Pedersen2010a_poster.pdf}
}
BibTeX:
@inproceedings{Pedersen2010c,
  author = {Marius Pedersen and Nicolas Bonnier and Jon Y. Hardeberg and Fritz Albregtsen},
  title = {Estimating Print Quality Attributes by Image Quality Metrics},
  booktitle = {Color and Imaging Conference},
  address = {San Antonio, TX},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2010},
  pages = {68--73},
  url = {http://www.colorlab.no/content/download/29992/360170/file/Pedersen2010a_poster.pdf}
}
BibTeX:
@inproceedings{Pedersen2009c,
  author = {Marius Pedersen and Nicolas Bonnier and Jon Y. Hardeberg and Fritz Albregtsen},
  title = {Attributes of a New Image Quality Model for Color Prints},
  booktitle = {17th Color Imaging Conference},
  address = {Albuquerque, NM, USA},
  month = {Nov},
  year = {2009},
  pages = {204--209},
  url = {http://colorlab.no/content/download/26878/303515/file/Pedersen2009c_Poster.pdf}
}
BibTeX:
@inproceedings{Pedersen2012a,
  author = {Marius Pedersen and Ivar Farup},
  title = {Simulation of Image Detail Visibility using Contrast Sensitivity Functions and Wavelets},
  booktitle = {Color and Imaging Conference},
  address = {Los Angeles, CA, USA},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2012},
  pages = {70--75}
}
BibTeX:
@inproceedings{Pedersen2009e,
  author = {Marius Pedersen and Jon Yngve Hardeberg},
  title = {A new spatial hue angle metric for perceptual image difference},
  booktitle = {Second International Workshop Computational Color Imaging (CCIW09)},
  address = {Saint-Etienne, France},
  month = {Mar},
  publisher = {Springer},
  year = {2009},
  series = {Lecture Notes in Computer Science},
  volume = {5646},
  pages = {81--90},
  url = {http://www.springerlink.com/link.asp?id=105633}
}
Abstract: The wide variety of distortions that images are subject to during acquisition, processing, storage, and reproduction can degrade their perceived quality. Since subjective evaluation is time-consuming, expensive, and resource-intensive, objective methods of evaluation have been proposed. One type of these methods, image quality (IQ) metrics, have become very popular and new metrics are proposed continuously. This paper aims to give a survey of one class of metrics, full-reference IQ metrics. First, these IQ metrics were classified into different groups. Second, further IQ metrics from each group were selected and evaluated against six state-of-the-art IQ databases.
BibTeX:
@article{Pedersen2012,
  author = {Pedersen, Marius and Hardeberg, Jon Yngve},
  title = {Full-Reference Image Quality Metrics: Classification and Evaluation},
  journal = {Foundations and Trends in Computer Graphics and Vision},
  year = {2012},
  volume = {7},
  number = {1},
  pages = {1--80},
  url = {http://www.nowpublishers.com/product.aspx?product=CGV&doi=0600000037}
}
Abstract: Color image difference metrics have been proposed to find differences between an original image and a reproduction. One of these metrics is the hue angle algorithm proposed by Hong and Luo in 2002. This metric does not take into account the spatial properties of the human visual system, and it could therefore miscalculate the differences between the original and the reproduction. In this article we propose a new color image difference metric based on the hue angle algorithm that takes into account the spatial properties of the human visual system. The proposed metric, the Spatial Hue Angle Metric, has been subjected to extensive testing. The results show improvement in performance compared to the original metric proposed by Hong and Luo, and improvement over or similar performance to traditional metrics, such as the Structural Similarity Metric and Spatial-CIELAB.
BibTeX:
@article{Pedersen2012b,
  author = {Marius Pedersen and Jon Yngve Hardeberg},
  title = {A New Spatial Filtering Based Image Difference Metric Based on Hue Angle Weighting},
  month = {September},
  journal = {Journal of Imaging Science and Technology},
  year = {2012},
  volume = {56},
  pages = {50501-1--50501-12},
  url = {http://www.ingentaconnect.com/search/article?option1=tka&value1=A+New+Spatial+Filtering+Based+Image+Difference+Metric+Based+on+Hue+Angle+Weighting&pageSize=10&index=1}
}
BibTeX:
@conference{Pedersen2009,
  author = {Marius Pedersen and Jon Yngve Hardeberg},
  title = {SHAME: A new spatial hue angle metric for perceptual image difference},
  booktitle = {Vision Sciences Society 9th Annual Meeting},
  address = {Naples, Florida},
  month = {May},
  year = {2009},
  note = {Vision Sciences Society},
  url = {http://www.colorlab.no/content/download/25170/268394/file/SHAME_Poster_compressed.pdf}
}
BibTeX:
@techreport{Pedersen2009d,
  author = {Marius Pedersen and Jon Yngve Hardeberg},
  title = {Survey of full-reference image quality metrics},
  address = {Gj{\o}vik, Norway},
  month = {June},
  year = {2009},
  number = {5},
  note = {ISSN: 1890-520X},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9330/1/rapport052009_elektroniskversjon.pdf}
}
BibTeX:
@inproceedings{Pedersen2008b,
  author = {Marius Pedersen and Jon Yngve Hardeberg},
  title = {Rank Order and Image Difference Metrics},
  booktitle = {CGIV 2008 Fourth European Conference on Color in Graphics, Imaging and Vision},
  address = {Terrassa, Spain},
  month = {Jun},
  publisher = {IS\&T},
  year = {2008},
  pages = {120-125}
}
Abstract: We have used image difference metrics to measure the quality of a set of images in order to determine how well they predict perceived image difference. We carried out a psychophysical experiment with 25 observers, along with a recording of the observers' gaze positions. The image difference metrics used were CIELAB deltaEab, S-CIELAB, the hue angle algorithm, iCAM and SSIM. A frequency map from the eye tracker data was applied as a weighting to the image difference metrics. The results indicate an improvement in correlation between the predicted image difference and the perceived image difference.
BibTeX:
@inproceedings{Pedersen2008,
  author = {Marius Pedersen and Jon Yngve Hardeberg and Peter Nussbaum},
  title = {Using gaze information to improve image difference metrics},
  booktitle = {Human Vision and Electronic Imaging VIII (HVEI-08)},
  address = {San Jose, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2008},
  series = {SPIE proceedings},
  volume = {6806},
  pages = {680611-1--680611-12},
  keywords = {Image difference metrics, Eye tracking, CIELAB deltaEab, S-CIELAB, SSIM, Hue angle, iCAM.}
}
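The gaze-map weighting described in the abstract above amounts to a weighted average of a per-pixel difference map. A minimal sketch under that reading; the paper's exact combination rule is not reproduced, and the function name is hypothetical:

```python
def gaze_weighted_difference(diff_map, gaze_map):
    """Weight a per-pixel image difference map by an eye-tracking
    frequency map of the same shape and return the weighted mean.
    Falls back to a plain mean if the gaze map is all zeros."""
    total_w = sum(sum(row) for row in gaze_map)
    if total_w == 0:
        cells = [v for row in diff_map for v in row]
        return sum(cells) / len(cells)
    weighted = sum(d * w
                   for drow, wrow in zip(diff_map, gaze_map)
                   for d, w in zip(drow, wrow))
    return weighted / total_w
```

With a uniform gaze map this reduces to the unweighted metric; a map concentrated on salient regions (e.g. faces) pulls the score towards the differences observers actually attend to.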
BibTeX:
@conference{Pedersen2008c,
  author = {Marius Pedersen and Jon Yngve Hardeberg and Peter Nussbaum},
  title = {Using gaze information to improve image difference metrics},
  booktitle = {Scandinavian Workshop on Applied Eye-tracking},
  address = {Lund, Sweden},
  month = {Apr},
  year = {2008}
}
BibTeX:
@inproceedings{Pedersen2013,
  author = {Marius Pedersen and Xinwei Liu and Ivar Farup},
  title = {Improved Simulation of Image Detail Visibility Using the Non-Subsampled Contourlet Transform},
  booktitle = {The 21st Color and Imaging Conference (CIC)},
  address = {Albuquerque, NM, USA},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2013},
  pages = {191--196}
}
BibTeX:
@inproceedings{Pedersen2008a,
  author = {Marius Pedersen and Alessandro Rizzi and Jon Yngve Hardeberg and Gabriele Simone},
  title = {Evaluation of contrast measures in relation to observers perceived contrast},
  booktitle = {CGIV 2008 - Fourth European Conference on Color in Graphics, Imaging and Vision},
  address = {Terrassa, Spain},
  month = {Jun},
  publisher = {IS\&T},
  year = {2008},
  pages = {253-256}
}
BibTeX:
@conference{Pedersen2011b,
  author = {Marius Pedersen and G. Simone and M. Gong and I. Farup},
  title = {A total variation based color image quality metric with perceptual contrast filtering},
  booktitle = {International conference on Pervasive Computing, Signal Processing and Applications},
  address = {Gj{\o}vik, Norway},
  month = {September},
  year = {2011}
}
BibTeX:
@inproceedings{Pedersen2011a,
  author = {Marius Pedersen and Yuanlin Zheng and Jon Yngve Hardeberg},
  title = {Evaluation of Image Quality Metrics for Color Prints},
  booktitle = {Image Analysis},
  publisher = {Springer},
  year = {2011},
  series = {Lecture Notes in Computer Science},
  volume = {6688},
  pages = {317--326},
  url = {http://colorlab.no/content/download/32395/381253/file/Pedersen2011aJonPresentation.pdf}
}
BibTeX:
@mastersthesis{Rahadianti2012,
  author = {Laksmita Rahadianti},
  title = {Automatic Semantic Annotation for Media Learning Objects},
  school = {Gj{\o}vik University College},
  year = {2012}
}
BibTeX:
@mastersthesis{RAJA2013,
  author = {Kiran Bylappa Raja},
  title = {Biometric applications using light-field camera},
  school = {Gj{\o}vik University College},
  year = {2013}
}
Abstract: Detecting artifacts introduced by gamut mapping algorithms is necessary to ensure the quality of color image reproduction. Machine-based detection of artifacts can reduce the tedious work of visual inspection. In this work, we propose to use contrast information to detect the artifacts introduced by the process of gamut mapping. We further evaluate the proposed algorithm on a set of gamut mapped images and analyze the results. The results are validated against existing benchmarks.
BibTeX:
@conference{Raja2013a,
  author = {Kiran B Raja and Marius Pedersen},
  title = {Artifact detection in gamut mapped images using saliency},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sept},
  publisher = {IEEE},
  year = {2013}
}
Abstract: The iris is one of the preferred biometric modalities. Nevertheless, the focus of the iris image has to be good enough to achieve good recognition performance. Traditional iris imaging devices in the visible spectrum suffer from limited depth-of-field, which results in out-of-focus iris images. The acquisition of the iris image is thus repeated until a satisfactory focus is obtained, or the image is post-processed to improve the visibility of the texture pattern. Badly focused images obtained due to non-optimal focus degrade the identification rate. In this work, we propose a novel scheme to capture high quality iris samples by exploring new sensors based on light-field technology to address the limited depth-of-field exhibited by conventional iris sensors. The idea stems from the availability of multiple depth/focus images in a single exposure. We propose to use the best-focused iris image from the set of depth images rendered by the Light-field Camera (LFC). We further evaluate the proposed scheme experimentally with a unique and newly acquired iris database simulating real-life scenarios.
BibTeX:
@conference{Raja2013,
  author = {Kiran Bylappa Raja and R. Raghavendra and Faouzi Alaya Cheikh and Christoph Busch},
  title = {Robust iris recognition using light-field camera},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sept},
  publisher = {IEEE},
  year = {2013}
}
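Selecting the best-focused image from the light-field depth stack, as described in the abstract above, can be sketched with a simple gradient-energy focus measure. The actual focus criterion used by the authors is not specified here, so this Tenengrad-style score is an assumption, and the function names are hypothetical:

```python
def focus_measure(img):
    """Gradient-energy sharpness score for a greyscale image given as a
    list of rows: the sum of squared horizontal and vertical first
    differences. Sharper (better-focused) images score higher."""
    h, w = len(img), len(img[0])
    horiz = sum((img[y][x + 1] - img[y][x]) ** 2
                for y in range(h) for x in range(w - 1))
    vert = sum((img[y + 1][x] - img[y][x]) ** 2
               for y in range(h - 1) for x in range(w))
    return horiz + vert

def best_focused(depth_stack):
    """Return the sharpest image from the set of refocused images
    rendered by the light-field camera."""
    return max(depth_stack, key=focus_measure)
```

Defocus blur attenuates high spatial frequencies, so among refocused renderings of the same scene the one maximising gradient energy is the best candidate for iris feature extraction.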
BibTeX:
@inproceedings{Reddy2012,
  author = {Vamsidhar Reddy and Alexander Eichhorn and Jon Yngve Hardeberg and Raju Shrestha},
  title = {A fast method for global depth-map extraction from natural images},
  booktitle = {Proceedings of the 9th European Conference on Visual Media Production (CVMP)},
  year = {2012},
  pages = {59--65},
  url = {http://dl.acm.org/citation.cfm?id=2414696}
}
BibTeX:
@mastersthesis{Renani2007,
  author = {Siavash A. Renani},
  title = {Projection onto a textured wall},
  school = {Gj{\o}vik University College},
  year = {2007},
  url = {http://www.colorlab.no/content/download/21941/215668/file/Siavash_Renani_Master_thesis.pdf}
}
BibTeX:
@inproceedings{Renani2009,
  author = {Siavash Asgari Renani and Masato Tsukada and Jon Yngve Hardeberg},
  title = {Compensating for non-uniform screens in projection display systems},
  booktitle = {Color Imaging XIV: Displaying, Hardcopy, Processing, and Applications},
  address = {San Jose, CA, USA},
  month = {Jan},
  year = {2009},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7241}
}
BibTeX:
@inproceedings{Rizzi2008,
  author = {Alessandro Rizzi and Gabriele Simone and Roberto Cordone},
  title = {A modified algorithm for perceived contrast in digital images},
  booktitle = {CGIV 2008 - Fourth European Conference on Color in Graphics, Imaging and Vision},
  address = {Terrassa, Spain},
  month = {Jun},
  publisher = {IS\&T},
  year = {2008},
  pages = {249-252}
}
BibTeX:
@inproceedings{Rizzi2009,
  author = {Alessandro Rizzi and Aditya Sole and Peter Nussbaum},
  title = {Colour and Lightness Perception in Low and High Dynamic Range Scenes},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {110-116},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
BibTeX:
@inproceedings{Roch2007,
  author = {Sylvain Roch and Jon Yngve Hardeberg and Peter Nussbaum},
  title = {Effect of time spacing on the perceived color},
  booktitle = {SPIE Proceedings Color Imaging XII: Processing, Hardcopy, and Applications},
  year = {2007},
  volume = {6493}
}
Abstract: Image Characteristics (ICs) can be described as the parameters or properties, such as lightness, colorfulness or content, that describe an image. These ICs are interesting because they influence the quality of the image, depending on the processing carried out.
Since the selection of the processing method depends on the image being processed, studying the characteristics of the image could enable selecting the most appropriate method, and thus improve the Image Quality. This can further be used to develop a decision tool which could simplify the selections a user has to make before processing an image.
The goal of this thesis is to propose a set of the most important ICs, and to find methods to measure each of them. To reach this aim, Image Characteristics have been classified into four different categories: image attributes, image structure, image type, and image restrictions.
An uncontrolled experiment has been conducted in order to rate a set of images according to the ICs. This enabled us to compare the ratings from the observers to the methods used to measure the ICs. The aim of this comparison is to study the possibility of predicting the classification that observers will give without conducting any psychophysical experiment, and thus deciding for the observers what the best processing will be.
The comparison of the results given by ratings and algorithms shows that some ICs (Edges) are harder for the observers to rate, and that these very same ICs get the lowest correlation between ratings and algorithms. It also shows that some ICs (Dominant color, Colorfulness) can give a precision sometimes higher than 0.8 (1 being the best).
The last part of the thesis shows how these results could be used with Image Quality metrics to develop a decision tool for improving the processing of images.
BibTeX:
@mastersthesis{Royer2010,
  author = {Timoth{\'e}e Royer},
  title = {Influence of Image Characteristics on Image Quality},
  school = {{\'E}cole Nationale des Sciences G{\'e}ographiques and Gj{\o}vik University College},
  year = {2010}
}
Abstract: The aim of the project is to detect falling people in surveillance videos in a nursing home and raise an alarm to notify the concerned person about the event. The task is to develop a system that monitors people tracking using multiple cameras. Some work had already been done at the university: single camera person tracking. The existing single-camera tracking algorithm therefore needs to be improved, and a modified version proposed for multiple camera tracking. The fusion of the information from multiple cameras and its analysis for fall detection is the main task of this project.
BibTeX:
@mastersthesis{Rudakova2010,
  author = {Victoria Rudakova},
  title = {Probabilistic framework for multi-target tracking using multi-camera: applied to fall detection},
  school = {Gj{\o}vik University College},
  year = {2010},
  url = {http://www.hig.no/content/download/28555/327673/file/Victoria_Rudakova.pdf}
}
Abstract: Biometric systems refer to technologies that measure and analyze human physical characteristics. The most widely used characteristics are extracted from fingerprints, irises and retinas, facial patterns and hand measurements. Iris recognition is regarded as the most reliable of these biometrics. Nowadays, most commercial iris-based identification systems use algorithms developed by Daugman. The recognition rates advertised by Daugman are excellent; they were, however, very likely measured under ideal conditions. Our main goal in this work is to test Daugman's and other filtering algorithms proposed in the literature under varying conditions and compare their performances. This document describes the work done for the master thesis during spring 2007. The thesis focuses on iris-based identification under various conditions. The aim of this project is to examine under which conditions iris recognition is possible, and which filtering algorithm performs best under each unfavorable condition.
The iris recognition process usually consists of four major steps. The first step is to segment the iris out of the image containing the eye and part of the face, which localizes the iris pattern. Step two is the normalization, where the iris pattern is extracted and scaled to a predefined size. Step three is the encoding phase, where the details of the iris are filtered, extracted and represented in an iris code. The last step is the comparison, where two iris codes are compared and a similarity score is computed. In this thesis we have focused on the encoding and filter algorithms, in step three, under different unfavorable conditions. We used the open source code of Libor Masek and extended it with different filtering algorithms. The filters included in the analysis are: two Haar filters, one Log-Gabor filter and one Laplacian of Gaussian filter, which generate iris codes with sizes of 702 bits, 87 bits, 9600 bits and 9600 bits respectively. The filtering algorithms have been tested using a database of 500 iris images. The images in the database have been corrupted using four different degradation models: additive Gaussian noise, blur, changing the light intensity, and rotating the images. Two performance measures were used: the False Acceptance Rate (FAR) and the False Rejection Rate (FRR). The first is estimated using inter-class comparisons while the second is estimated using intra-class comparisons. The total number of comparisons performed in the experiments is approximately seven million.
Based on the experimental results we obtained in this work, we can conclude that the performance of all the tested algorithms is dramatically affected by the degradations. The major cause of the performance drop is the sensitivity of the segmentation process to such degradations. These degradations introduce segmentation errors in many of the images.
The experimental results also show that the Log-Gabor and Laplacian of Gaussian filters perform best under optimal conditions. Under non-optimal conditions, however, the Haar filter with the 702-bit representation achieves close to the best results under most of the degradation models. This may be due to the fact that it extracts the iris characteristics based on fewer but more prominent details in the iris, which are relatively more robust to degradations.
BibTeX:
@mastersthesis{Sagbakken2007,
  author = {Hans Christian Sagbakken},
  title = {Irisgjenkjenning under varierende forhold},
  school = {Gj{\o}vik University College},
  year = {2007},
  url = {http://www.hig.no/content/download/9054/122123/file/Sagbakken%20-%20Irisgjenkjenning%20under%20varierende%20forhold.pdf}
}
BibTeX:
@conference{Sdiri2014,
  author = {B. Sdiri and A. Beghdadi and F. Alaya Cheikh},
  title = {A Brief Overview on Specular Reflection Removal Techniques for Endoscopic Images/Videos},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Dec},
  year = {2014}
}
Abstract: Successful colour management of projection systems depends on knowledge of their characteristics. In this study, two typical portable projectors have been characterised. The two projectors are based on different technologies, Liquid Crystal Display (LCD) and Digital Light Processing (DLP). Measurements were made with a spectroradiometer.
The LCD projector showed good colour additivity. The luminance difference between the sum of primaries and white was 0.33% after correction of the black level. The corresponding value for the DLP projector was 56%. This is due to a non-filtering segment in the filter wheel.

The inter-channel dependency was calculated. The LCD projector showed good independence. For the DLP projector, the additional segment complicates the interpretation of the calculated values.

Measurements of the signal input-output relationship have been made. The LCD projector showed a power function response, while the DLP projector showed an S-shaped response. Neither of these are native responses of the projectors, so this is probably a deliberate design.

The chromaticity changes of primary colours and grey depending on the input signal were measured. The chromaticity constancy was poor for both projectors. It was shown that the relatively high black luminance is the dominant reason for this.

The spatial uniformity was surprisingly poor. Measurements revealed uniformities down to 20% and 30% for the DLP and the LCD projector, respectively.

Our tests showed that both the intensity and the colour of the background influenced the displayed colour. The average colour differences were found to be ΔE*ab = 4.83 for the LCD and ΔE*ab = 2.94 for the DLP projector.

BibTeX:
@inproceedings{Seime2002,
  author = {Lars Seime and Jon Yngve Hardeberg},
  title = {Characterisation of LCD and DLP Projection Displays},
  booktitle = {Tenth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications},
  address = {Scottsdale, Arizona, USA},
  month = {Nov},
  year = {2002},
  pages = {277-282},
  note = {ISBN / ISSN: 0-89208-241-0}
}
Abstract: Under natural viewing conditions humans tend to fixate on specific parts of an image that naturally interest them. A saliency map is a map of the regions which are more prominent than others in terms of low-level image properties such as intensity, color and orientation. With some modifications it can be used to simulate natural human fixation, also known as the gaze map. There are numerous applications in the fields of engineering, marketing and art that can benefit from an understanding of human visual fixation, such as image quality evaluation, label design, etc. The objective of this research is to understand the factors that influence the saliency map and the gaze map, and to modify the saliency map in order to make it more similar to the gaze map. Eye movements of 20 test subjects were captured using the eye tracking equipment available in the lab. The gaze maps obtained were averaged and superimposed over the corresponding original images. The saliency map toolbox [Walther(2006)] was modified by the addition of face detection [Sauquet et al.(2005)Sauquet, Rodriguez & Marcel]. The gaze maps were analyzed and compared with the modified saliency maps.
BibTeX:
@mastersthesis{Sharma2008a,
  author = {Sharma, Puneet},
  title = {Perceptual Image Difference Metrics. Saliency Maps \& Eye Tracking},
  school = {Gj{\o}vik University College},
  year = {2008},
  url = {http://www.colorlab.no/content/download/21940/215665/file/Puneet_Sharma_Master_thesis.pdf}
}
BibTeX:
@inproceedings{Sharma2009,
  author = {Puneet Sharma and Faouzi Alaya Cheikh and Jon Yngve Hardeberg},
  title = {Face Saliency in Human Visual Saliency Models},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {12-18},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
BibTeX:
@conference{Sharma2014,
  author = {Puneet Sharma and Faouzi Alaya Cheikh and Jon Yngve Hardeberg},
  title = {Spatio-temporal analysis of eye fixations data in images},
  booktitle = {IEEE International Conference on Image Processing (ICIP)},
  address = {Paris, France},
  publisher = {IEEE},
  year = {2014}
}
BibTeX:
@conference{Sharma2008,
  author = {Puneet Sharma and Faouzi Alaya Cheikh and Jon Yngve Hardeberg},
  title = {Saliency Map for Human Gaze Prediction in Images},
  booktitle = {Sixteenth Color Imaging Conference},
  address = {Portland, Oregon, USA},
  month = {Nov},
  year = {2008}
}
BibTeX:
@phdthesis{Shrestha2014d,
  author = {Raju Shrestha},
  title = {Multispectral imaging: Fast acquisition, capability extension, and quality evaluation},
  month = {Dec},
  school = {University of Oslo},
  year = {2014},
  note = {PhD thesis},
  url = {http://www.mn.uio.no/ifi/forskning/aktuelt/arrangementer/disputaser/2014/shrestha.html}
}
BibTeX:
@techreport{Shrestha2011c,
  author = {Raju Shrestha},
  title = {Gaze maps for video sequences: use of eye tracker to record the gaze of viewers of video sequences},
  year = {2011},
  number = {4},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_26316}
}
BibTeX:
@techreport{Shrestha2011d,
  author = {Raju Shrestha},
  title = {Multispectral Imaging for Biometrics – A Review},
  year = {2011},
  number = {5},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_26317}
}
Abstract: Multispectral and 3D imaging are two complementary imaging technologies with many advantages and great potential for widespread use in the future, if we can make them faster and more practical. This thesis aims at conceiving such a fast and practical two-in-one multispectral-stereo system.
Multispectral imaging systems remedy the problems of conventional three-channel (RGB) color imaging, such as metamerism and dependency on acquisition conditions, and at the same time present high spatial and spectral resolution. A multispectral image is composed of several mono-channel images of the same object; each image holds data about a specific wavelength depending on the filter used. The major problems of existing multispectral imaging systems are that they are slow, as they require multiple takes, and/or quite expensive, contributing to the current lack of widespread use in the consumer segment. This thesis explores creating a fast, practicable and affordable multispectral system with the use of two commercially available digital cameras. Each camera is equipped with an optical filter. These two filters are chosen so that they modify and spread the sensitivities of the cameras such that they become well spaced throughout the visible spectrum, covering complementary wavebands, thus giving rise to a six-channel multispectral system.
Like multispectral imaging, 3D imaging systems are also gaining popularity and are of great use in imaging fields. It would be a great advantage to have an integrated system capable of both multispectral and 3D imaging at the same time. This thesis, therefore, aims at conceiving such a multispectral-stereo system. The two cameras modified with appropriate filters that form the six-channel multispectral system are used in a stereoscopic configuration to acquire depth information, making the system capable of 3D imaging as well. This leads to a faster, practicable and at the same time affordable multispectral-stereo system.
Such a system could be used for many applications, for example for 3D artwork object acquisition. Knowing the spectral reflectance allows us to simulate the appearance of a 3D object under any virtual illuminant. Moreover, it lets us store this valuable information for future restoration.
BibTeX:
@mastersthesis{Shrestha2010a,
  author = {Raju Shrestha},
  title = {Conceiving a Fast and Practical Multispectral-Stereo System},
  school = {Gj{\o}vik University College},
  year = {2010},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_16057}
}
BibTeX:
@incollection{Shrestha2014a,
  author = {Shrestha, Raju and Hardeberg, Jon Yngve},
  title = {How Are LED Illumination Based Multispectral Imaging Systems Influenced by Different Factors?},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science (LNCS)},
  volume = {8509},
  pages = {61-71},
  keywords = {spectral imaging; light emitting diodes; demosaicing; noise; quality},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_8},
  doi = {10.1007/978-3-319-07998-1_8}
}
BibTeX:
@inproceedings{Shrestha2012b,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Simultaneous Multispectral Imaging and Illuminant Estimation Using a Stereo Camera},
  booktitle = {Image and Signal Processing},
  month = {June},
  publisher = {Springer},
  year = {2012},
  series = {Lecture Notes in Computer Science (LNCS)},
  volume = {7340},
  pages = {45--55},
  url = {http://www.springerlink.com/content/45q43tm8650777k1/}
}
Abstract: This paper proposes an extension to CFA based multispectral imaging with an added capability of illuminant estimation. A special filter is placed on top of the regular R, G and B filters of a camera, replacing one of the two green filters. This gives a six-channel multispectral image. A normal RGB image is produced by the RGB filters, and the corresponding filtered RGB image is obtained from the filtered RGB channels. The two images of a scene allow estimating the illuminant using the chromagenic illuminant estimation algorithm. The proposed system is thus capable of acquiring not only a multispectral image but also a normal RGB image, and at the same time capable of estimating the illuminant under which the image is captured. This makes the system useful in many applications in color imaging and computer vision. Simulation experiments confirm the effectiveness of the proposed system.
BibTeX:
@inproceedings{Shrestha2013,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {CFA Based Simultaneous Multispectral Imaging and Illuminant Estimation},
  booktitle = {Computational Color Imaging (CCIW2013)},
  address = {Chiba, Japan},
  month = {March},
  publisher = {Springer-Verlag},
  year = {2013},
  series = {Lecture Notes in Computer Science (LNCS)},
  volume = {7786},
  pages = {158-170},
  url = {http://link.springer.com/chapter/10.1007/978-3-642-36700-7_13}
}
BibTeX:
@inproceedings{Shrestha2015,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Multispectral imaging: An application to density measurement of photographic paper in the manufacturing process control},
  booktitle = {Image Processing: Machine Vision Applications VIII},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9405},
  pages = {9405-16},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2191090}
}
BibTeX:
@inproceedings{Shrestha2015a,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Quality comparison of multispectral imaging systems based on real experimental data},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015},
  pages = {1266--1271}
}
Abstract: Increasing the number of imaging channels beyond the conventional three has been shown to be beneficial for a wide range of applications. However, it is mostly limited to imaging in a controlled environment, where the capture environment (illuminant) is known a priori. We propose here a novel system and methodology for multispectral imaging in an uncontrolled environment. Two images of a scene, a normal RGB image and a filtered RGB image, are captured. The illuminant under which an image is captured is estimated using a chromagenic based algorithm, and the multispectral system is calibrated automatically using the estimated illuminant. A 6-band multispectral image of the scene is obtained from the two RGB images. The spectral reflectances of the scene are then estimated using an appropriate spectral estimation method. The proposed concept and methodology are generic, as they are valid regardless of how the two images of a scene are acquired. A system that can acquire two images of a scene can be realized, for instance, in two shots using a digital camera and a filter, in a single shot using a stereo camera, or with a custom color filter array design. Simulation experiments using a stereo camera based system confirm the effectiveness of the proposed method. This could be useful in many imaging and computer vision applications.
BibTeX:
@article{Shrestha2014,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Spectrogenic imaging: {A} novel approach to multispectral imaging in an uncontrolled environment},
  month = {Apr},
  journal = {Optics Express},
  publisher = {Optical Society of America (OSA)},
  year = {2014},
  volume = {22},
  number = {8},
  pages = {9123--9133},
  url = {http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-22-8-9123},
  doi = {10.1364/OE.22.009123}
}
Abstract: Multispectral imaging, which extends the number of imaging channels beyond the conventional three, has been demonstrated to be beneficial for a wide range of applications. Its ability to acquire images beyond the visible range and its applicability in many different application domains have led to the design and development of a number of multispectral imaging technologies and systems. Given different systems to choose from, it is important to be able to compare them, both in general and, in many situations, with respect to a certain application of interest. In this paper, we evaluate several conventional and recently proposed multispectral imaging systems, both qualitatively and quantitatively. Both spectral and colorimetric accuracies are used as the criteria in the quantitative evaluation. The systems are evaluated and compared for two specific applications, imaging of natural scenes and of paintings (cultural heritage), as well as for a general spectral imaging solution. This work provides a framework for the evaluation and comparison of different multispectral imaging systems, which we believe will be very helpful in identifying the most appropriate technique or system for a given application.
BibTeX:
@inproceedings{Shrestha2014c,
  author = {Raju Shrestha and Jon Y. Hardeberg},
  title = {Evaluation and Comparison of Multispectral Imaging Systems},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {107--112},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00018}
}
Abstract: We propose a new method of LED matrix/panel design for use as active illumination in a multispectral acquisition system. The number and types of LEDs are first determined. The desired probability of appearance of the different LEDs is then determined based on their luminous intensity profiles. The spectral sensitivity of the camera has also been accounted for. The method determines the number of LEDs of each type needed to form the smallest block (usually a square) in the LED matrix, and distributes them so that the LED matrix fulfills the two important design requirements, spatial uniformity and consistency of LED distribution, and leads to the generation of an optimal or suboptimal arrangement of the LEDs. An LED panel of any size can then be constructed by repeating the block. We confirm the effectiveness of our proposal by simulation, and also validate it with real LEDs.
BibTeX:
@inproceedings{Shrestha2013a,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {LED Matrix Design for Multispectral Imaging},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
BibTeX:
@inproceedings{Shrestha2013c,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Multispectral Imaging Using LED Illumination and an RGB Camera},
  booktitle = {The 21st Color and Imaging Conference (CIC)},
  address = {Albuquerque, NM, USA},
  month = {Nov},
  publisher = {IS\&T and SID},
  year = {2013},
  pages = {8--13}
}
Abstract: In this paper we propose a new illuminant estimation technique based on an extension of chromagenic color constancy. The basic chromagenic illuminant estimation method takes two shots of a scene, one without and one with a specially chosen color filter in front of the camera lens. Here, we instead introduce chromagenic filters on top of the R, G or B filters, in place of one of the two green filters in the Bayer pattern. The chromagenic filters allow obtaining two images of the same scene via demosaicking: a normal RGB image, and a chromagenic image, the equivalent of an RGB image taken through a chromagenic filter. The illuminant can then be estimated using chromagenic based illumination estimation algorithms. The method therefore does not require two shots, and no registration issues are involved, unlike in basic chromagenic filter based color constancy, making it a more practical and useful computational color constancy method for many applications.
BibTeX:
@inproceedings{Shrestha2012,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Computational color constancy using chromagenic filters in color filter arrays},
  booktitle = {Sensors, Cameras, and Systems for Industrial/Scientific Applications XIII},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2012},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {8298},
  pages = {8298-25},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1284115},
  doi = {10.1117/12.912073}
}
Abstract: Chromagenic color constancy is one of the promising solutions to the color constancy problem. However, this technique requires two shots of a scene: a conventional RGB image and an additional image that is optically pre-filtered using a chromagenic filter. This severely limits the usefulness of chromagenic based color constancy algorithms to static scenes only. In this paper we propose a solution to this with the use of a digital stereo camera, where we place the chromagenic filter in front of one of the lenses of the stereo camera. This allows capturing two images of a scene, one unfiltered and one filtered, in a single shot. The illuminant can then be estimated using chromagenic based illumination estimation methods. Since more and more digital stereo cameras are becoming commercially available, the system can be built quite easily, and, being a one-shot solution, it is a practical computational color constancy method that could be useful in many applications. Experiments with a modern commercial digital stereo camera show promising results.
BibTeX:
@inproceedings{Shrestha2012a,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Computational color constancy using a stereo camera},
  booktitle = {6th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Amsterdam, The Netherlands},
  month = {May},
  year = {2012},
  pages = {69--74}
}
Abstract: The development of faster and more cost effective acquisition systems is very important for the widespread use of multispectral imaging. This paper studies the feasibility of using two commercially available RGB cameras, each equipped with an optical filter, as a six-channel multispectral image capture system. The main idea is to pick the best pair of filters from among readily available filters that modify the sensitivities of the two cameras in such a way that their dominant wavelengths are spread well spaced throughout the visible spectrum. Simulations with a reasonably large number of available filters show encouraging results, clearly indicating the feasibility of such systems.
BibTeX:
@inproceedings{Shrestha2010,
  author = {Raju Shrestha and Jon Yngve Hardeberg},
  title = {Multispectral Image Capture using two RGB cameras},
  booktitle = {Proceedings of the European Signal Processing Conference (EUSIPCO)},
  address = {Aalborg, Denmark},
  month = {August},
  year = {2010},
  pages = {1801--1805},
  url = {http://www.eurasip.org/Proceedings/Eusipco/Eusipco2010/Contents/proceedings.html}
}
Abstract: LED (Light Emitting Diode) based spectral imaging is advantageous for its fast computer-controlled switching ability, the availability of many different types of LEDs, and its cost effectiveness. It has been used in some applications such as biometrics and the arts; however, it has not been explored in film scanning. In this paper, we propose an LED based spectral film scanner that allows acquiring spectral data while at the same time producing more accurate digital color images. Such a system is, in practice, constrained by a limit on the number of LEDs to be used. We have therefore also studied the performance of the system as a function of the number of LEDs. Simulation experiments show that the system is capable of acquiring accurate color images with a fairly reasonable number of LEDs. We have also investigated the influence of noise, which shows that noise plays some part in determining the number of LEDs to be used.
BibTeX:
@inproceedings{Shrestha2012c,
  author = {Raju Shrestha and Jon Yngve Hardeberg and Clotilde Boust},
  title = {{LED} based multispectral film scanner for accurate color imaging},
  booktitle = {The 8th International Conference on Signal Image Technology and Internet based Systems (SITIS)},
  address = {Sorrento, Naples, Italy},
  month = {November},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {811--817},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6395174}
}
Abstract: In the past few years there has been a significant volume of research carried out in the field of multispectral image acquisition. The focus of most of this work has been on multispectral image acquisition systems that usually require multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels. We can thus introduce the concept of the multispectral color filter array (MCFA). This field has not been much explored, however; in particular, little focus has been given to developing systems aimed at the reconstruction of scene spectral reflectance.

In this paper, we have explored how the spatial arrangement of the multispectral color filter array affects the acquisition accuracy, with the construction of MCFAs of different sizes. We have simulated acquisitions of several spectral scenes using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. It has been found that the precision and the quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and that the effect becomes more and more prominent with the increase in the number of channels. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We have shown that the spatial arrangement of the array is an important design issue.

BibTeX:
@inproceedings{Shrestha2011a,
  author = {Raju Shrestha and Jon Yngve Hardeberg and Rahat Khan},
  title = {Spatial Arrangement of Color Filter Array for Multispectral Image Acquisition},
  booktitle = {Sensors, Cameras, and Systems for Industrial, Scientific, and Consumer Applications XII},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7875},
  pages = {787502},
  doi = {10.1117/12.872253}
}
Abstract: Multispectral color imaging is a promising technology, which can solve many of the problems of traditional RGB color imaging. However, it still lacks widespread and general use because of its limitations. State-of-the-art multispectral imaging systems need multiple shots, making them not only slower but also incapable of capturing scenes in motion. Moreover, the systems are mostly costly and complex to operate. The purpose of the work described in this paper is to propose a one-shot six-channel multispectral color image acquisition system using a stereo camera or a pair of cameras in a stereoscopic configuration, and a pair of optical filters. The best pair of filters is selected from among readily available filters such that they modify the sensitivities of the two cameras in such a way that the sensitivities are spread reasonably well throughout the visible spectrum and give optimal reconstruction of spectral reflectance and/or color. As the cameras are in a stereoscopic configuration, the system is capable of acquiring 3D images as well, and stereo matching algorithms provide a solution to the image alignment problem. Thus the system can be used as a two-in-one multispectral-stereo system. However, this paper mainly focuses on the multispectral part. Both simulations and experiments have shown that the proposed system performs well spectrally and colorimetrically.
BibTeX:
@inproceedings{Shrestha2011,
  author = {Raju Shrestha and Jon Yngve Hardeberg and Alamin Mansouri},
  title = {One-Shot Multispectral Color Imaging with a Stereo Camera},
  booktitle = {Digital Photography VII},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2011},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {7876},
  pages = {787609},
  doi = {10.1117/12.872428}
}
Abstract: This paper proposes a one-shot six-channel multispectral color image acquisition system using a stereo camera and a pair of optical filters. The two filters of the best pair, selected from among readily available filters such that they modify the sensitivities of the two cameras in a way that produces optimal estimation of spectral reflectance and/or color, are placed in front of the two lenses of the stereo camera. The two images acquired from the stereo camera are then registered for pixel-to-pixel correspondence. The spectral reflectance and/or color at each pixel of the scene is estimated from the corresponding camera outputs in the two images. Both simulations and experiments have shown that the proposed system performs well both spectrally and colorimetrically. Since it acquires the multispectral images in one shot, the proposed system can overcome the limitations of the slow and complex acquisition process and the costliness of state-of-the-art multispectral imaging systems, leading to its possible use in widespread applications.
BibTeX:
@article{Shrestha2011b,
  author = {Raju Shrestha and Alamin Mansouri and Jon Yngve Hardeberg},
  title = {Multispectral Imaging using a Stereo Camera: Concept, Design and Assessment},
  month = {Sep},
  journal = {EURASIP Journal on Advances in Signal Processing},
  year = {2011},
  series = {Multispectral and Hyperspectral Image and Video Processing},
  volume = {2011},
  number = {1},
  url = {http://asp.eurasipjournals.com/content/2011/1/57},
  doi = {10.1186/1687-6180-2011-57}
}
Abstract: Spectral imaging has many advantages over conventional three-channel colour imaging, and has numerous applications in many domains. Despite many benefits, applications, and different techniques being proposed, little attention has been given to the evaluation of the quality of spectral images and of spectral imaging systems. There has been some research in the area of spectral image quality, mostly targeted at specific application domains. This paper seeks to provide a comprehensive review of existing research in the area of spectral image quality metrics. We classify existing spectral image quality metrics into categories based on how they were developed, their main features, and their intended applications. Spectral quality metrics, in general, aim to measure the quality of spectral images without specifically considering the imaging systems used to acquire the images. Since many different types of spectral imaging systems could be used to acquire spectral images in an application, it is important to evaluate the performance and quality of these spectral imaging systems too. However, to our knowledge, little attention has been given to this previously. As a first step in this direction, we aim to identify the different factors that influence the quality of spectral imaging systems. In almost every stage of a spectral imaging workflow, there may be one or more factors that influence the quality of the final spectral image, and hence of the imaging system used to acquire it. Identification of these factors, we believe, will be essential in developing a framework for evaluating the quality of spectral imaging systems.
BibTeX:
@article{Shrestha2014b,
  author = {Raju Shrestha and Ruven Pillay and Sony George and Jon Yngve Hardeberg},
  title = {Quality evaluation of spectral imaging: Quality factors and metrics},
  journal = {Journal of the International Colour Association (AIC)},
  year = {2014},
  volume = {12},
  pages = {22--35},
  url = {http://aic-colour-journal.org/index.php/JAIC/article/view/147}
}
BibTeX:
@mastersthesis{SIAKIDES2013,
  author = {Christos Siakides},
  title = {Automatic annotation of lecture videos for HIP},
  school = {Gj{\o}vik University College},
  year = {2013}
}
BibTeX:
@mastersthesis{SIMON2013,
  author = {Thomas Simon},
  title = {Mixed illuminant colour correction for videoconferencing},
  school = {Gj{\o}vik University College},
  year = {2013}
}
BibTeX:
@article{Simone2014,
  author = {Gabriele Simone and Giuseppe Audino and Ivar Farup and Fritz Albregtsen and Alessandro Rizzi},
  title = {Termite Retinex: a new implementation based on a colony of intelligent agents},
  journal = {Journal of Electronic Imaging},
  year = {2014},
  volume = {23},
  number = {1},
  pages = {013006},
  doi = {10.1117/1.JEI.23.1.013006}
}
Abstract: This paper describes a novel implementation of the Retinex algorithm in which the exploration of the image is made by an ant swarm. In this case the purpose of the ant colony is not the optimization of some constraint but an exploration of the image content that is as diffuse as possible, with the possibility of tuning the exploration parameters to the image content. For this reason, this approach is called ``termites'', instead of ants, to underline the idea of the eager exploration of the image. The paper presents the spatial characteristics of locality and discusses differences with other Retinex implementations.
BibTeX:
@inproceedings{Simone2012,
  author = {Gabriele Simone and Giuseppe Audino and Ivar Farup and Alessandro Rizzi},
  title = {Termites: a Retinex implementation based on a colony of agents},
  booktitle = {Color Imaging XVII: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2012},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {8292},
  pages = {8292-23},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_28055}
}
Abstract: In this paper we investigate whether the Difference of Gaussians model is able to predict observers' perceived difference in relation to compression artifacts. A new image difference metric specifically designed for compression artifacts is proposed. In order to evaluate this new metric, a psychophysical experiment was carried out, in which a dataset of 80 compressed JPEG and JPEG2000 images was generated from 10 different scenes. The results of the psychophysical experiment with 18 observers and the quality scores obtained from a large number of image difference metrics are presented.
Furthermore, a quantitative study based on a number of image difference metrics and five additional databases is performed in order to reveal the potential of the proposed metric. The analyses show that the proposed metric and most of the tested ones do not correlate well with the subjective test results, and thus the increased complexity of the recent metrics is not justified.
BibTeX:
@inproceedings{Simone2010a,
  author = {Gabriele Simone and Valentina Caracciolo and Marius Pedersen and Faouzi Alaya Cheikh},
  title = {Evaluation of a Difference of Gaussians Based Image Difference Metric in Relation to Perceived Compression Artifacts},
  booktitle = {Advances in Visual Computing - 6th International Symposium},
  address = {Las Vegas, NV},
  month = {Nov},
  publisher = {Springer},
  year = {2010},
  series = {Lecture Notes in Computer Science},
  pages = {491--500}
}
Abstract: Many algorithms for spatial color correction of digital images have been proposed in the past. Some of the most recently developed algorithms use stochastic sampling of the image in order to obtain maximum and minimum envelope functions. The envelopes are in turn used to guide the color adjustment of the entire image. In this paper, we propose to use a variational method instead of the stochastic sampling to compute the envelopes. A numerical scheme for solving the variational equations is outlined, and we conclude that the variational approach is computationally more efficient than using stochastic sampling. A perceptual experiment with 20 observers and 13 images is carried out in order to evaluate the quality of the resulting images with the two approaches. There is no significant difference between the variational approach and the stochastic sampling when it comes to overall image quality as judged by the observers. However, the observed level of noise in the images is significantly reduced by the variational approach.
BibTeX:
@inproceedings{Simone2012b,
  author = {Gabriele Simone and Ivar Farup},
  title = {Spatio-Temporal Retinex-like Envelope with Total Variation},
  booktitle = {6th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Amsterdam, The Netherlands},
  month = {May},
  year = {2012},
  pages = {176--181}
}
BibTeX:
@conference{Simone2008,
  author = {Gabriele Simone and Claudio Oleari},
  title = {Software with Visual Phenomena, Tests, and Standard Colorimetric Computations for Didactics and Laboratory},
  booktitle = {Sixteenth Color Imaging Conference},
  address = {Portland, Oregon},
  month = {Nov},
  year = {2008}
}
BibTeX:
@inproceedings{Simone2009c,
  author = {Gabriele Simone and Claudio Oleari and Ivar Farup},
  title = {An Alternative Color Difference Formula for Computing Image Difference},
  booktitle = {Proceedings from Gj{\o}vik Color Imaging Symposium 2009},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {H{\o}gskolen i Gj{\o}viks rapportserie},
  number = {4},
  pages = {8--11},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}
Abstract: In this paper, we approach color-image-difference metrics via a recently published Euclidean color-difference formula for small-medium color differences in log-compressed OSA-UCS space (C. Oleari, M. Melgosa and R. Huertas, J. Opt. Soc. Am. A, 26(1):121–134, 2009). We start from previous image-difference metrics, replacing the CIE color-difference formulae with the new one. Tests are made using the Pearson, Spearman and Kendall correlation coefficients. In particular, we compare the calculated image-difference metrics against the perceived image difference obtained with psychophysical experiments. Current results show improvements over the current state of the art, making this formula a key candidate for future image-difference metrics.
BibTeX:
@conference{Simone2009d,
  author = {Gabriele Simone and Claudio Oleari and Ivar Farup},
  title = {Performance of the Euclidean Color-Difference Formula in Log-Compressed OSA-UCS Space Applied to Modified-Image-Difference Metrics},
  booktitle = {11th Congress of the International Colour Association (AIC)},
  address = {Sydney, Australia},
  month = {Sep},
  year = {2009}
}
Abstract: In this paper, we present a new metric to estimate the perceived difference in contrast between an original image and a reproduction. This metric, named weighted-level framework ΔE_E (WLF-DEE), implements a multilevel filtering based on the difference of Gaussians model proposed by Tadmor and Tolhurst (2000) and the new Euclidean color difference formula in log-compressed OSA-UCS space proposed by Oleari et al. (2009). Extensive tests and analyses are presented on four different categories belonging to the well-known Tampere Image Database and on two databases developed at our institution, providing different distortions directly related to color and contrast. Comparisons in performance with other state-of-the-art metrics are also pointed out. Results promote WLF-DEE as a new stable metric for estimating the perceived magnitude of contrast between an original and a reproduction.
BibTeX:
@article{Simone2013,
  author = {Gabriele Simone and Marius Pedersen and Ivar Farup and Claudio Oleari},
  title = {Multi-level contrast filtering in image difference metrics},
  journal = {EURASIP Journal on Image and Video Processing},
  year = {2013},
  volume = {2013},
  url = {http://jivp.eurasipjournals.com/content/2013/1/39}
}
BibTeX:
@conference{Simone2010,
  author = {Gabriele Simone and Marius Pedersen and Jon Yngve Hardeberg},
  title = {Measuring perceptual contrast in uncontrolled environments},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Jul},
  year = {2010}
}
Abstract: In this paper we present a novel method to measure perceptual contrast in digital images. We start from a previous measure of contrast developed by Rizzi et al. [26], which presents a multilevel analysis. In the first part of the work the study is aimed mainly at investigating the contribution of the chromatic channels and whether a more complex neighborhood calculation can improve this previous measure of contrast. Following this, we analyze in detail the contribution of each level, developing a weighted multilevel framework. Finally, we perform an investigation of Regions-of-Interest in combination with our measure of contrast. In order to evaluate the performance of our approach, we have carried out a psychophysical experiment in a controlled environment and performed extensive statistical tests. Results show an improvement in correlation between measured contrast and observers' perceived contrast when the variances of the three color channels, taken separately, are used as weighting parameters for local contrast maps. Using Regions-of-Interest as weighting maps does not improve the ability of contrast measures to predict perceived contrast in digital images. This suggests that Regions-of-Interest cannot be used to improve contrast measures, as contrast is an intrinsic factor and is judged by the global impression of the image. This indicates that further work on contrast measures should account for the global impression of the image while preserving the local information.
BibTeX:
@article{Simone2012a,
  author = {Gabriele Simone and Marius Pedersen and Jon Yngve Hardeberg},
  title = {Measuring Perceptual Contrast in Digital Images},
  month = {Apr},
  journal = {Journal of Visual Communication and Image Representation},
  year = {2012},
  volume = {23},
  number = {3},
  pages = {491--506},
  url = {http://www.sciencedirect.com/science/article/pii/S1047320312000211}
}
Abstract: In this paper, we propose and discuss a novel approach for measuring perceived contrast. The proposed method comes from the modification of previous algorithms with a different local measure of contrast and with a parameterized way to recombine local contrast maps and color channels. We propose the idea of recombining the local contrast maps using gaze information, saliency maps and a gaze-attentive fixation finding engine as weighting parameters, giving attention to regions that observers stare at and hence find important. Our experimental results show that contrast measures cannot be improved using different weighting maps, as contrast is an intrinsic factor and is judged by the global impression of the image.
BibTeX:
@inproceedings{Simone2009b,
  author = {Gabriele Simone and Marius Pedersen and Jon Yngve Hardeberg and Ivar Farup},
  title = {On the use of gaze information and saliency maps for measuring perceptual contrast},
  booktitle = {16th Scandinavian Conference on Image Analysis},
  address = {Oslo, Norway},
  month = {Jun},
  year = {2009},
  series = {Lecture Notes in Computer Science},
  volume = {5575},
  pages = {597--606},
  url = {http://www.springerlink.com/link.asp?id=105633}
}
BibTeX:
@inproceedings{Simone2009,
  author = {Gabriele Simone and Marius Pedersen and Jon Yngve Hardeberg and Alessandro Rizzi},
  title = {Measuring perceptual contrast in a multilevel framework},
  booktitle = {Human Vision and Electronic Imaging XIV},
  month = {Jan},
  publisher = {SPIE},
  year = {2009},
  volume = {7240}
}
BibTeX:
@article{Simone2008a,
  author = {Gabriele Simone and Marius Pedersen and Jon Yngve Hardeberg and Alessandro Rizzi},
  title = {A multi-level framework for measuring perceptual image contrast},
  month = {Oct},
  journal = {Scandinavian Journal of Optometry and Visual Science},
  year = {2008},
  volume = {1},
  number = {1},
  pages = {15},
  url = {http://www.synsinformasjon.no/Optikeren/pop.cfm?FuseAction=Doc&pAction=View&pDocumentId=17216}
}
BibTeX:
@inproceedings{Simon-Liedtke2015c,
  author = {Joschua Simon-Liedtke},
  title = {Ethical considerations on gene therapy for color-deficient people},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015}
}
BibTeX:
@inproceedings{Simon-Liedtke2015d,
  author = {Joschua Simon-Liedtke},
  title = {Colorama: Extra color sensation for the color-deficient with gene therapy and modal augmentation},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015}
}
BibTeX:
@inproceedings{Simon-Liedtke2015a,
  author = {Joschua Simon-Liedtke and Ivar Farup},
  title = {Spatial Intensity Channel Replacement Daltonization (SIChaRDa)},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-43},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109921}
}
BibTeX:
@inproceedings{Simon-Liedtke2015b,
  author = {Joschua Simon-Liedtke and Ivar Farup},
  title = {Empirical disadvantages for color-deficient people},
  booktitle = {Mid-term meeting of the International Colour Association (AIC)},
  address = {Tokyo, Japan},
  month = {May},
  year = {2015}
}
BibTeX:
@inproceedings{Simon-Liedtke2015,
  author = {Joschua Simon-Liedtke and Ivar Farup and Bruno Laeng},
  title = {Evaluating color deficiency simulation and daltonization methods through visual search and sample-to-match: SaMSEM and ViSDEM},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-40},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109918}
}
BibTeX:
@inproceedings{Simon-Liedtke2013,
  author = {Joschua Thomas Simon-Liedtke and Jon Yngve Hardeberg},
  title = {Task-Based Accessibility Measurement of Daltonization Algorithms for Information Graphics},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {Jul},
  year = {2013}
}
Abstract: Computational color constancy or white balancing methods for digital cameras emulate the ability of the human visual system to adapt to different lighting situations and to maintain color constancy. Global white balancing algorithms have been shown to give remarkable results for scenes illuminated by one light source, but have proven less adequate for multi-illumination scenes where multiple light sources are present. Information from an additional near-infrared channel can be used to estimate the white point at every pixel in the image by comparing the pixels' NRGB values to a multi-dimensional lookup table with precomputed NRGB values. This estimated white point can then be used for white balancing via the linearized Bradford transform. The lookup table requires measurement of multiple reflectance and illumination spectra that are representative of an office environment. The method performs better than conventional global white balancing methods.
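The linearized Bradford transform mentioned in the abstract is a standard chromatic adaptation. A minimal sketch follows, using D65 and D50 white points purely for illustration; in the paper's method the per-pixel estimated white point would take the place of the source white:

```python
import numpy as np

# Standard Bradford cone-response matrix (ICC-style linearized transform).
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def bradford_adapt(xyz, white_src, white_dst):
    """Adapt an XYZ colour from a source white point to a destination one:
    transform to cone-like responses, scale by the white-point ratio,
    and transform back."""
    rho_src = M_BFD @ np.asarray(white_src, dtype=float)
    rho_dst = M_BFD @ np.asarray(white_dst, dtype=float)
    D = np.diag(rho_dst / rho_src)
    return np.linalg.inv(M_BFD) @ D @ M_BFD @ np.asarray(xyz, dtype=float)

# Illustrative white points (approximate D65 and D50 XYZ values).
d65 = [0.9504, 1.0000, 1.0888]
d50 = [0.9642, 1.0000, 0.8249]
```

By construction, adapting the source white itself yields the destination white exactly, which is a quick sanity check for the implementation.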
BibTeX:
@inproceedings{Simon-Liedtke2014,
  author = {Joschua Thomas Simon-Liedtke and Per Ove Husøy and Jon Yngve Hardeberg},
  title = {Pixel-Wise Illuminant Estimation for Mixed Illuminant Scenes based on Near-Infrared Information},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {217--221},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00038}
}
Abstract: HDR is a field in image processing that has received a lot of attention in recent years. Techniques for capturing HDR images and tone mapping them back to viewable data have been proposed. Many different ideas have been pursued, some with a background in the Human Visual System (HVS), but the problem of determining the quality of these reproductions still exists. In low dynamic range imaging, one solution has been visual inspection, comparing the reproduction against an original; but as this is a labour-intensive, time-consuming and highly subjective process, the need for automated measures which can predict quality has resulted in different image difference metrics. Comparison of HDR and LDR, however, is no trivial task: currently, no method of automated comparison has been deemed a viable solution, due to the difference in dynamic range. In this master thesis, we present a novel framework, extending recent research, which enables us to compare HDR and LDR content, and from this to use standard image difference metrics to evaluate their quality. These measures are tested against data from a perceptual experiment to verify the stability and quality of the framework. Initial results indicate that the proposed framework enables us to evaluate the quality of such reproductions on the tested scenes, but that some problems remain unsolved.
BibTeX:
@mastersthesis{Skjerven2010,
  author = {J{\o}rn Skjerven},
  title = {The Performance of Image Difference Metrics for Rendered HDR Images},
  school = {Gj{\o}vik University College},
  year = {2011},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_21057}
}
BibTeX:
@mastersthesis{Slavkovikj2011,
  author = {Viktor Slavkovikj},
  title = {Color Calibration of a Multi-camera array},
  school = {Gj{\o}vik University College},
  year = {2011}
}
Abstract: The advance and rapid development of electronic imaging technology has led the way to the production of imaging sensors capable of acquiring good quality digital images with a high resolution. At the same time, the cost and size of imaging devices have been reduced. This has incited increasing research interest in techniques that use images obtained by multiple camera arrays. Use of multi-camera arrays is attractive because it allows the capture of multi-view images of dynamic scenes, enabling the creation of novel computer vision and computer graphics applications, as well as next generation video and television systems. There are additional challenges when using a multi-camera array, however. Due to inconsistencies in the fabrication process of imaging sensors and filters, multi-camera arrays exhibit inter-camera color response variations. For the majority of applications which use multi-view images obtained from multi-camera arrays, it is insufficient to assume that the different cameras' responses can be considered the same without prior verification. Therefore, it is necessary to characterize the response of the different cameras in the array.
BibTeX:
@inproceedings{Slavkovikj2012,
  author = {Viktor Slavkovikj and Jon Yngve Hardeberg},
  title = {Characterizing the response of charge-coupled device digital color cameras},
  booktitle = {Sensors, Cameras, and Systems for Industrial/Scientific Applications XIII},
  address = {San Francisco, CA, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2012},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {8298},
  pages = {8298-14},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_28056}
}
BibTeX:
@inproceedings{Slavuj2015,
  author = {Radovan Slavuj and Ludovic G. Coppel and Jon Yngve Hardeberg},
  title = {Effect of ink spreading and ink amount on the accuracy of the Yule-Nielsen modified spectral Neugebauer model},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-13},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109897}
}
Abstract: Many paper substrates cannot receive more than a 280% ink amount per patch without ink bleeding, preventing the creation of the Neugebauer primaries (NP) training set for those substrates. This work estimates the Neugebauer primaries of a 7-colour printer on a highly absorbing substrate allowing a 700% ink amount, using Kubelka-Munk and general radiative transfer theory. Both models predict the K/S factor based on the input reflectance of the primary inks and assume linear mixing. General radiative transfer is angle-resolved and allows simulating reflectance anisotropy in different measurement geometries. The results show acceptable CIE ΔE*00 values, and a CIE ΔE*00 reduction of about 20% when using general radiative transfer theory instead of Kubelka-Munk. For less absorbing substrates, the NP estimation method is tested by using these estimates in the Yule-Nielsen modified spectral Neugebauer model. Another advantage of general radiative transfer is that it can simulate different measurement geometries. This would enable simulation of the otherwise tedious measurement procedure with d/8 instruments.
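The linear K/S mixing assumption described in the abstract can be sketched with single-constant Kubelka-Munk theory. The helper names and example values below are illustrative, not the paper's data:

```python
import numpy as np

def k_over_s(R):
    """Kubelka-Munk K/S ratio of an opaque layer: (1 - R)^2 / (2R)."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

def km_reflectance(ks):
    """Invert K/S back to reflectance: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S)."""
    ks = np.asarray(ks, dtype=float)
    return 1.0 + ks - np.sqrt(ks ** 2 + 2.0 * ks)

def mix_reflectance(primaries, concentrations):
    """Estimate a mixture's spectral reflectance by mixing the K/S values
    of the primary inks linearly, as the abstract assumes."""
    ks_mix = sum(c * k_over_s(R) for c, R in zip(concentrations, primaries))
    return km_reflectance(ks_mix)
```

A quick sanity check is the round trip: converting a reflectance to K/S and back must recover it, and mixing identical primaries with weights summing to one must leave the reflectance unchanged.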
BibTeX:
@inproceedings{Coppel2014,
  author = {Radovan Slavuj and Ludovic G. Coppel and Melissa Olen and Jon Yngve Hardeberg},
  title = {Estimating Neugebauer primaries for multi-channel spectral printing modeling},
  booktitle = {Measuring, Modeling, and Reproducing Material Appearance},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2014},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9018},
  pages = {9018-11},
  url = {http://spie.org/EI/conferencedetails/measuring-modeling-reproducing-material-appearance}
}
Abstract: Spectral reflectance represents physical information about an object surface. In a conventional colour management system the spectral reflectance is converted to a common space which describes the object's tristimulus values, i.e. how that particular colour will look under one of the standard illuminants or real light sources. However, many applications require the object reflectance to be known independently of the viewing illuminant.
This work addresses this need by attempting to reconstruct the spectral reflectance of an imaged surface using RGB camera signals as input. The camera has been characterized by direct measurement of its sensitivity curves, and three of the most widely used printing technologies are employed to obtain test samples.
Using decomposition and dimensionality reduction techniques from linear algebra, such as PCA and the Wiener estimate, the performance of the method is evaluated by varying different parameters. Here, the emphasis is on the formation of the basis vectors and the covariance matrix. Additional optimization is introduced to model the printed samples used.
Observations show that if most of the parameters are carefully controlled, the reconstructed spectral reflectance is largely satisfactory, although this depends on the sample used. Improvements in the analysis have given better approximations, and after optimization has been employed, the reconstruction method could be used in many applications in the graphic arts area.
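The Wiener estimate mentioned in the abstract can be sketched as a linear mapping from camera responses to reflectances learned from training pairs. This is a simplified, noise-free illustration; the thesis' exact covariance construction and regularization may differ:

```python
import numpy as np

def wiener_matrix(R_train, C_train):
    """Estimation matrix W mapping camera responses to reflectances.
    R_train: (n_wavelengths, n_samples) training reflectances.
    C_train: (n_channels,    n_samples) corresponding camera responses.
    W = Crc Ccc^-1, with cross- and auto-covariances built from the
    training pairs (normalization cancels, so it is omitted)."""
    Crc = R_train @ C_train.T   # cross-covariance reflectance/camera
    Ccc = C_train @ C_train.T   # autocovariance of camera signals
    return Crc @ np.linalg.inv(Ccc)

def estimate_reflectance(W, c):
    """Reconstruct a spectrum from a single camera response vector."""
    return W @ np.asarray(c, dtype=float)
```

If the camera responses really are a noise-free linear function of the reflectances, the learned matrix reproduces that mapping exactly, which makes a convenient self-test.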
BibTeX:
@conference{Slavuj2013b,
  author = {Radovan Slavuj and Phil Green},
  title = {To develop a method of estimating spectral reflectance from camera RGB values},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sep},
  publisher = {IEEE},
  year = {2013}
}
Abstract: This study has investigated how the growing technology of multichannel printing and the area of spectral printing in the graphic arts could help the textile industry to communicate accurate colour. In order to reduce costs, printed samples that serve for colour judgment and decision making in the design process are required. With the increased colour gamut of multichannel printing systems, we expect to include most of the colours of the textile samples. The results show that with careful control of ink limits, and by bypassing the colour management limitations imposed on the printing system, we are able to include more than 90% of the colour textile samples within the multichannel printer colour gamut. We also evaluated how many of the textile colour spectra we can print with multichannel printers. This gives a basis for further work in the area of spectral printing, and particularly for the application area discussed in this study. By comparing spectral gamuts, we also conclude that it is possible to print around 75% of all reflectances of the textile colours.
BibTeX:
@article{Slavuj2014,
  author = {Radovan Slavuj and Kristina Marijanovic and Jon Yngve Hardeberg},
  title = {Colour and spectral simulation of textile samples onto paper: a feasibility study},
  journal = {Journal of the International Colour Association (AIC)},
  year = {2014},
  volume = {12},
  pages = {36--43},
  url = {http://aic-colour-journal.org/index.php/JAIC/article/view/148}
}
BibTeX:
@inproceedings{Slavuj2013,
  author = {Radovan Slavuj and Kristina Marijanovic and Jon Yngve Hardeberg},
  title = {Feasibility study for textile color simulation with multichannel printing technology},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {Jul},
  year = {2013}
}
BibTeX:
@conference{Slavuj2013a,
  author = {Radovan Slavuj and Peter Nussbaum and Jon Yngve Hardeberg},
  title = {Review and analysis of spectral characterization models and halftoning for multi-channel printing},
  booktitle = {iarigai},
  address = {Chemnitz, Germany},
  month = {Sep},
  year = {2013}
}
BibTeX:
@inproceedings{Slavuj2015a,
  author = {Radovan Slavuj and Marius Pedersen},
  title = {Multichannel DBS halftoning for improved texture quality},
  booktitle = {Color Imaging XX: Displaying, Processing, Hardcopy, and Applications},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9395},
  pages = {9395-17},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2109900}
}
Abstract: Gonio-spectrometers and multi-angle spectrophotometers are successfully used for performing multi-angle measurements of non-diffuse materials such as the metallic inks and paints used in the print and packaging industry and the car paint industry. Using a gonio-spectrometer to measure the amount of light reflected at different incident and reflection angles is a time-consuming and expensive process, and is mainly performed in laboratories for research purposes.

In order to perform multi-angle planar measurements in a relatively cheaper and faster way, we use a geometrical method which can be combined with an image based measurement setup to measure such materials. The image based measurement setup records the light reflected from the sample on a digital pixel array sensor. The geometrical method estimates the incident (i) and reflection (r) angles at a given point (P) on the sample surface. It also maps the pixel positions on the camera sensor array to the corresponding point (P) on the sample surface. This information can therefore be used to determine the amount of light incident on and reflected from a given point (P) on the sample surface and record it accordingly.

The proposed measurement setup can be used, for example in the packaging industry, to perform online gonio-metric measurements during the material reproduction process and to estimate the incident and reflection angles of homogeneous flexible object materials when measuring light incident on and reflected from the sample at different angles.

The results obtained show that the geometrical method corrects for the geometrical distortions and estimates the incident (i) and reflection (r) angles successfully.
BibTeX:
@inproceedings{Sole2014,
  author = {Aditya Sole and Ivar Farup},
  title = {An Image based Multi-Angle Method for Estimating Reflection Geometries of Flexible Objects},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {91--96},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00015}
}
BibTeX:
@inproceedings{Sole2015,
  author = {Aditya Sole and Ivar Farup and Shoji Tominaga},
  title = {An image-based multi-directional reflectance measurement setup for flexible objects},
  booktitle = {Measuring, Modeling, and Reproducing Material Appearance},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9398},
  pages = {9398-18}
}
Abstract: This paper defines one of the many ways to set up a soft proofing workstation, comprising a monitor display and a viewing booth, in a printing workflow as per the Function 4 requirements of PSO certification. The soft proofing requirements defined by ISO 12646 are explained and implemented in this paper. An NEC SpectraView LCD2180WG LED display, together with a Just colorCommunicator 2 viewing booth and an X-Rite EyeOne Pro spectrophotometer, is used in this setup. The display monitor colour gamut is checked for its ability to simulate the ISO standard printer profile (ISOcoated_v2_300_eci.icc) as per the ISO 12646 requirements.

Methods and procedures to perform ambient light measurements and viewing booth measurements using the EyeOne Pro spectrophotometer are explained. Adobe Photoshop CS4 software is used to simulate the printer profile on the monitor display, while NEC SpectraView Profiler software is used to calibrate and characterize the display, and also to perform ambient light and viewing booth measurements and adjustments.

BibTeX:
@conference{Sole2010,
  author = {Aditya Sole and Peter Nussbaum and Jon Yngve Hardeberg},
  title = {Implementing ISO12646 standards for soft proofing in a standardized printing workflow according to PSO},
  booktitle = {iarigai},
  address = {Montreal, Canada},
  month = {Sep},
  year = {2010},
  keywords = {Colour measurement, colour management, process control standards, soft proofing, display calibration, display characterisation}
}
Abstract: This is an analysis of how digital projector displays work under different conditions and lighting. Both the portable and the mounted projectors at Gjøvik University College have been tested under four different conditions: dark and light room, with and without an ICC profile. To find out more about the importance of the lighting conditions in a room and the level of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer was used to make the profile. Eye-One is a low-cost product compared to the spectroradiometers commonly used when creating profiles for various equipment. The results from the analysis indicated great visual differences between the projectors. DLP projectors generally have smaller color gamuts than LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the canvas is of great importance for the visual impression. If too much reflection and other ambient light reaches the canvas, the projected image gets pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. The color blue has the greatest variations among the projector displays, which makes it harder to predict. Red and green have generally the same color gamut, but green is the most stable one.
BibTeX:
@mastersthesis{Strand2005,
  author = {Monica Strand},
  title = {Karakterisering og profilering av projektorer},
  school = {Gj{\o}vik University College},
  year = {2005},
  url = {http://www.colorlab.no/content/download/21938/215659/file/Monica_Strand_Master_thesis.pdf}
}
Abstract: Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. Our DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflection and other ambient light reaches the screen, the projected image gets pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average DeltaE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays, which therefore makes them harder to predict.
BibTeX:
@inproceedings{Strand2005a,
  author = {Monica Strand and Jon Yngve Hardeberg and Peter Nussbaum},
  title = {Color image quality in projection displays: a case study},
  booktitle = {Image Quality and System Performance II},
  address = {San Jose, California},
  month = {Jan},
  year = {2005},
  pages = {185-195},
  note = {ISBN / ISSN: 0-8194-5641-1}
}
Abstract: In the context of color imaging, this thesis focuses on colorimetric characterization of displays and multi-display systems. Starting from the conventional pointwise approach, we continue to some spatial analysis. We give some special attention to the duality between a professional and a consumer-oriented characterization.
In the first part of this thesis we consider pointwise display color characterization. We propose, evaluate and improve several methods to control the color in displays.
We investigate in depth the PLVC (Piecewise Linear assuming Variation in Chromaticity) model, especially in comparison to the PLCC (Piecewise Linear assuming Chromaticity Constancy) model. We show that this model can be highly beneficial for LCD (Liquid Crystal Display) technology. We evaluate and improve an end-user method proposed by Bala and Braun. This method is quick and simple and does not need any measurement device other than a simple digital color camera. We confirm that this method gives significantly better results than using default gamma settings for both LCD and DLP (Digital Light Processing) projectors.
We focus on the distribution of color patches in color space for the establishment of 3D LUT (Look-Up Table) models. We propose a new, accurate display color characterization model based on polyharmonic spline interpolation. This model shows good results and is applied in real time for the accurate colorimetric rendering of multi-spectral images of art paintings viewed under virtual illuminants. We propose methods to build an optimized structure that permits inverting any display color characterization forward model. Several criteria linked with the grid itself or with an evaluation data set are tested. Our evaluation shows that using our methods, we can achieve better results than with a regular equidistributed grid.
In the second part, we establish a basis for spatial color characterization via the quantitative analysis of the color shift and its spatial variation throughout the display area. We show that the spatial chromaticity shift is not negligible in some cases, and that some features are spatially invariant within one display of a given technology.
BibTeX:
@phdthesis{Thomas2009a,
  author = {Jean-Baptiste Thomas},
  title = {Colorimetric characterization of displays and multi-display systems},
  month = {Oct},
  school = {Universit{\'e} de Bourgogne},
  year = {2009},
  keywords = {Display, multi-display system, display color characterization, display spatial color uniformity}
}
BibTeX:
@inproceedings{Thomas2009,
  author = {Jean-Baptiste Thomas and Arne Magnus Bakke},
  title = {A colorimetric study of spatial uniformity in projection displays},
  booktitle = {Second International Workshop Computational Color Imaging (CCIW09)},
  address = {Saint-Etienne, France},
  month = {Mar},
  year = {2009},
  series = {Lecture Notes in Computer Science},
  volume = {5646},
  url = {http://www.springerlink.com/link.asp?id=105633}
}
BibTeX:
@article{Thomas2010,
  author = {Jean-Baptiste Thomas and Arne Magnus Bakke and Jeremie Gerhardt},
  title = {Spatial Nonuniformity of Color Features in Projection Displays: A Quantitative Analysis},
  journal = {Journal of Imaging Science and Technology},
  publisher = {IST},
  year = {2010},
  volume = {54},
  number = {3},
  pages = {030403},
  keywords = {brightness; colour displays; optical projectors},
  url = {http://link.aip.org/link/?IST/54/030403/1},
  doi = {10.2352/J.ImagingSci.Technol.2010.54.3.030403}
}
BibTeX:
@inproceedings{Thomas2008,
  author = {Jean-Baptiste Thomas and Philippe Colantoni and Jon Yngve Hardeberg and Irene Foucherot and Pierre Gouton},
  title = {An inverse display color characterization model based on an optimized geometrical structure},
  booktitle = {Color Imaging XIII: Processing, Hardcopy, and Applications},
  address = {San Jose, USA},
  month = {Jan},
  publisher = {SPIE},
  year = {2008},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {6807}
}
BibTeX:
@article{Thomas2008a,
  author = {Jean-Baptiste Thomas and Philippe Colantoni and Jon Y. Hardeberg and Ir\`{e}ne Foucherot and Pierre Gouton},
  title = {A geometrical approach for inverting display color characterization models},
  journal = {Journal of the Society for Information Display},
  year = {2008},
  volume = {16},
  number = {10},
  pages = {1021--1031}
}
BibTeX:
@article{Thomas2008b,
  author = {Jean-Baptiste Thomas and Jon Y. Hardeberg and Ir\`{e}ne Foucherot and Pierre Gouton},
  title = {The PLVC display color characterization model revisited},
  month = {Dec},
  journal = {Color Research \& Application},
  year = {2008},
  volume = {33},
  number = {6},
  pages = {449-460}
}
BibTeX:
@inproceedings{Thomas2007,
  author = {Jean-Baptiste Thomas and Jon Yngve Hardeberg and Irene Foucherot and Pierre Gouton},
  title = {Additivity based LC display color characterization},
  booktitle = {GCIS2007 Proceedings},
  year = {2007},
  pages = {50-55}
}
Abstract: In this chapter, we present the problem of cross-media color reproduction, that is, how to achieve consistent reproduction of images in different media with different technologies. Of particular relevance for the color image processing community are displays, whose color properties have not been extensively covered in the previous literature. Therefore, we go more in depth concerning how to model displays in order to achieve colorimetric consistency.
BibTeX:
@incollection{Thomas2013,
  author = {Thomas, Jean-Baptiste and Hardeberg, Jon Y and Tr{\'e}meau, Alain},
  title = {Cross-Media Color Reproduction and Display Characterization},
  booktitle = {Advanced Color Image Processing and Analysis},
  publisher = {Springer},
  year = {2013},
  pages = {81--118}
}
BibTeX:
@article{Tong2010,
  author = {Yubing Tong and Hubert Konik and Faouzi A. Cheikh and Alain Tremeau},
  title = {Full Reference Image Quality Assessment Based on Saliency Map Analysis},
  journal = {Journal of Imaging Science and Technology},
  publisher = {IST},
  year = {2010},
  volume = {54},
  number = {3},
  pages = {030503},
  keywords = {image recognition; set theory},
  url = {http://link.aip.org/link/?IST/54/030503/1},
  doi = {10.2352/J.ImagingSci.Technol.2010.54.3.030503}
}
Abstract: The main objective of this paper is to identify and disseminate good practice in quality assurance and enhancement as well as in teaching and learning at master level. This paper focuses on the experience of the Erasmus Mundus Master program CIMET (Color in Informatics and Media Technology). Amongst topics covered, we discuss the adjustments necessary to a curriculum designed for excellent international students and their preparation for a global labor market.
BibTeX:
@article{Tremeau2011,
  author = {Alain Tr{\'e}meau and Jon Hardeberg and Javier Hernandez-Andr{\`e}s and Juan Luis Nieves and Jussi Parkkinen},
  title = {An innovative {E}rasmus {M}undus {M}aster program in {C}olor in {I}nformatics and {M}edia {T}echnology},
  month = {June},
  journal = {Journal sur l'enseignement des sciences et technologies de l'information et des syst{\`e}mes},
  year = {2011},
  volume = {10},
  url = {http://www.j3ea.org/index.php?option=com_article&access=doi&doi=10.1051/j3ea/2011010&Itemid=129},
  doi = {10.1051/j3ea/2011010}
}
BibTeX:
@incollection{Wajid2014,
  author = {Wajid, Rameez and Mansoor, Atif Bin and Pedersen, Marius},
  title = {A Human Perception Based Performance Evaluation of Image Quality Metrics},
  booktitle = {Advances in Visual Computing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8887},
  pages = {303-312},
  url = {http://dx.doi.org/10.1007/978-3-319-14249-4_29},
  doi = {10.1007/978-3-319-14249-4_29}
}
Abstract: Subjective image quality evaluation, though time consuming, is presently the most reliable evaluation method. The aim of this research was to undertake an independent study to investigate the similarity between a psychophysical experiment and an existing image quality database, where the experiments were undertaken in geographically distant locations with racially dissimilar populations and varied numbers of subjects. The image quality database obtained from the Laboratory for Image and Video Engineering (LIVE) was used to carry out subjective evaluations. The Difference Mean Opinion Scores (DMOS) were calculated from the raw scores and were analyzed against those from LIVE. The results indicate a high correlation between the two evaluations, thus confirming a globally consistent human perception.
BibTeX:
@conference{Wajid2013,
  author = {Rameez Wajid and Atif Bin Mansoor and Marius Pedersen},
  title = {A study of human perception similarity for image quality assessment},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sept},
  publisher = {IEEE},
  year = {2013}
}
BibTeX:
@conference{Wang2014b,
  author = {C. Wang and R. Palomar and F. Alaya Cheikh},
  title = {Stereo video analysis for instrument tracking in laparoscopic surgery},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Dec},
  year = {2014}
}
BibTeX:
@incollection{Wang2014,
  author = {Wang, Congcong and Wang, Xingbo and Hardeberg, Jon Yngve},
  title = {A Linear Interpolation Algorithm for Spectral Filter Array Demosaicking},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8509},
  pages = {151-160},
  keywords = {Multispectral; Demosaicking; Linear; Residual},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_18},
  doi = {10.1007/978-3-319-07998-1_18}
}
Abstract: Single-sensor colour imaging systems mostly employ a colour filter array (CFA). This enables the acquisition of a colour image by a single sensor in one exposure, at the cost of reduced spatial resolution. The idea of the CFA fits multispectral purposes well when more than three types of filters are incorporated into the array, which results in a multispectral filter array (MSFA). In comparison with a CFA, an MSFA trades spatial resolution for spectral resolution. A simulation was performed to evaluate the colorimetric performance of such CFA/MSFA imaging systems and to investigate the trade-off between spatial resolution and spectral resolution, by comparing CFA and MSFA systems utilising various filter characteristics and demosaicking methods, including intra- and inter-channel bilinear interpolation as well as discrete wavelet transform based techniques. In general, 4-band and 8-band MSFAs provide better than or comparable performance to the CFA setup in terms of CIEDE2000 and S-CIELAB colour difference. This indicates that MSFAs would be favourable for colorimetric purposes.
BibTeX:
@incollection{Wang2015a,
  author = {Wang, Xingbo and Green, Philip J. and Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Gouton, Pierre},
  title = {Evaluation of the Colorimetric Performance of Single-Sensor Image Acquisition Systems Employing Colour and Multispectral Filter Array},
  booktitle = {Computational Color Imaging},
  publisher = {Springer International Publishing},
  year = {2015},
  series = {Lecture Notes in Computer Science},
  volume = {9016},
  pages = {181-191},
  keywords = {Colorimetric performance; Colour filter array; Multispectral imaging; Single-sensor},
  url = {http://dx.doi.org/10.1007/978-3-319-15979-9_18},
  doi = {10.1007/978-3-319-15979-9_18}
}
BibTeX:
@conference{Wang2014c,
  author = {X. Wang and M. Pedersen and J.-B. Thomas},
  title = {The influence of chromatic aberration on demosaicking},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Dec},
  year = {2014}
}
BibTeX:
@inproceedings{Wang2013a,
  author = {Xingbo Wang and Jean-Baptiste Thomas and Jon Yngve Hardeberg},
  title = {A study on the impact of spectral characteristics of filters on multispectral image acquisition},
  booktitle = {12th Congress of the International Colour Association (AIC)},
  address = {Newcastle, UK},
  month = {July},
  year = {2013}
}
Abstract: The idea of the colour filter array may be adapted to multispectral image acquisition by integrating more filter types into the array and developing associated demosaicking algorithms. Several methods employing the discrete wavelet transform (DWT) have been proposed for CFA demosaicking. In this work, we put forward an extended use of the DWT for multispectral filter array demosaicking. The extension seemed straightforward; however, we observed striking results. This work contributes to a better understanding of the issue by demonstrating that the spectral correlation and spatial resolution of the images exert a crucial influence on the performance of DWT based demosaicking.
BibTeX:
@conference{Wang2013b,
  author = {Xingbo Wang and Jean-Baptiste Thomas and Jon Hardeberg},
  title = {Discrete wavelet transform based multispectral filter array demosaicking},
  booktitle = {Colour and Visual Computing Symposium (CVCS)},
  month = {Sept},
  publisher = {IEEE},
  year = {2013}
}
Abstract: In every aspect, the spectral characteristics of filters play an important role in an image acquisition system. For a colorimetric system, it is traditionally believed that narrow-band filters give rise to higher accuracy of colour reproduction, whereas wide-band filters, such as complementary colour filters, have the advantage of higher sensitivity. In the context of multispectral image capture, the objective is very often to retrieve an estimation of the spectral reflectance of the captured objects. The literature does not provide a satisfactory answer as to which configuration yields the best results. It is therefore of interest to verify which type of filter performs best in estimating the reflectance spectra for the purpose of multispectral image acquisition. A series of experiments were conducted on a simulated imaging system, with six types of filters of varying bandwidths paired with three linear reflectance estimation methods. The results show that filter bandwidth exerts a direct influence on the accuracy of reflectance estimation. Extremely narrow-band filters did not perform well in the experiment, and the relation between bandwidth and reflectance estimation accuracy is not monotonic. It is also indicated that the optimal number of filters depends on the spectral similarity metrics employed.
BibTeX:
@article{Wang2014a,
  author = {Xingbo Wang and Jean-Baptiste Thomas and Jon Y Hardeberg and Pierre Gouton},
  title = {Multispectral imaging: narrow or wide band filters?},
  journal = {Journal of the International Colour Association (AIC)},
  year = {2014},
  volume = {12},
  pages = {44--51},
  url = {http://aic-colour-journal.org/index.php/JAIC/article/view/149}
}
Abstract: Inspired by the concept of the colour filter array (CFA), the research community has shown much interest in adapting the idea of the CFA to the multispectral domain, producing multispectral filter arrays (MSFAs). In addition to newly devised methods of MSFA demosaicking, there exists a wide spectrum of methods developed for CFAs. Among others, some vector based operations can be adapted naturally for multispectral purposes. In this paper, we focused on studying two vector based median filtering methods in the context of MSFA demosaicking. One solves demosaicking problems by means of vector median filters, and the other applies median filtering to the demosaicked image in spherical space as a subsequent refinement process to reduce artefacts introduced by demosaicking. To evaluate the performance of these measures, a tool kit was constructed with the capability of mosaicking, demosaicking and quality assessment. The experimental results demonstrated that vector median filtering performed less well for natural images, except black and white images; however, the refinement step reduced the reproduction error numerically in most cases. This proves the feasibility of extending CFA demosaicking into the MSFA domain.
BibTeX:
@inproceedings{Wang2013,
  author = {Xingbo Wang and Jean-Baptiste Thomas and Jon Yngve Hardeberg and Pierre Gouton},
  title = {Median filtering in multispectral filter array demosaicking},
  booktitle = {Digital Photography IX},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2013},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {8660},
  pages = {86600E},
  url = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1568612}
}
BibTeX:
@inproceedings{Wang2010,
  author = {Zhaohui Wang and Anna Aristova and Jon Yngve Hardeberg},
  title = {Evaluating the Effect of Noise on 3D LUT-Based Color Transformations},
  booktitle = {5th European Conference on Colour in Graphics, Imaging, and Vision (CGIV)},
  address = {Joensuu, Finland},
  month = {June},
  year = {2010},
  pages = {88--93}
}
BibTeX:
@conference{Wang2010a,
  author = {Zhaohui Wang and Anna Aristova and Jon Yngve Hardeberg},
  title = {Quantifying Smoothness of the LUTs-based Color Transformations},
  booktitle = {31st International Congress on Imaging Science (ICIS)},
  address = {Beijing, China},
  month = {May},
  year = {2010}
}
BibTeX:
@article{Wang2012,
  author = {Zhaohui Wang and Jon Yngve Hardeberg},
  title = {Development of an adaptive bilateral filter for evaluating color image difference},
  journal = {Journal of Electronic Imaging},
  year = {2012},
  volume = {21},
  number = {2},
  url = {http://spiedigitallibrary.org/jei/resource/1/jeime5/v21/i2/p023021_s1?isAuthorized=no}
}
BibTeX:
@inproceedings{Wang2009,
  author = {Zhaohui Wang and Jon Yngve Hardeberg},
  title = {An adaptive Bilateral Filter for Predicting Color Image Difference},
  booktitle = {17th Color Imaging Conference},
  address = {Albuquerque, NM, USA},
  month = {Nov},
  year = {2009},
  pages = {27-31}
}
Abstract: Is there a relation between halftone measurements with densitometers (converted into tone value with the Murray-Davies equation) and halftone measurements with dot meters in newspaper print? This is the basis for my thesis. The measuring devices used in this analysis are the Spectrolino from GretagMacbeth (a spectrophotometer used as a densitometer) and the dot meters CCDot (Centurfax/X-Rite), SpectroPlate (Techkon) and Lithocam (Troika Systems). A repeatability analysis was conducted for all of the measuring devices. The results indicated that the Spectrolino was accurate according to industrial standards. All of the dot meters suffered from low repeatability in newspaper print. The measuring devices are separated into three combinations, each consisting of one dot meter and the densitometer (CCDot-Spectrolino, SpectroPlate-Spectrolino and Lithocam-Spectrolino). These combinations are analyzed separately. Using regression analysis, the measurement data are fitted to second order polynomials. The results are given as estimates of the polynomial parameters, i.e. the polynomials give the relation between halftone measurements with one of the dot meters and halftone measurements with the Spectrolino. The residuals between predicted and measured halftone values with the Spectrolino are used to judge the suitability of the model. Due to the large uncertainty of the estimated parameters, the model does not accurately describe the relation. This is explained by the low repeatability of the dot meters in newspaper print. Factors that cause this low repeatability are emphasized in this report. Dot meters are not recommended for halftone measurements in newspaper print.
BibTeX:
@mastersthesis{Wroldsen2006,
  author = {Maria Sunde Wroldsen},
  title = {Densitometriske og planimetriske m\r{a}linger av raster},
  school = {Gj{\o}vik University College},
  year = {2006},
  url = {http://www.colorlab.no/content/download/21935/215651/file/Maria_Wroldsen_Master_thesis.pdf}
}
BibTeX:
@article{Wroldsen2008,
  author = {Wroldsen, Maria Sunde and Nussbaum, Peter and Hardeberg, Jon Yngve},
  title = {A comparison of densitometric and planimetric techniques for newspaper printing},
  journal = {TAGA Journal},
  year = {2008},
  volume = {4}
}
BibTeX:
@inproceedings{Wroldsen2007,
  author = {Maria Sunde Wroldsen and Peter Nussbaum and Jon Yngve Hardeberg},
  title = {Densitometric and Planimetric Measurement Techniques for Newspaper Printing},
  booktitle = {TAGA Proceedings},
  year = {2007},
  pages = {273-290}
}
BibTeX:
@article{Yubing2011,
  author = {Yubing, Tong and Cheikh, Faouzi Alaya and Guraya, Fahad Fazal Elahi and Konik, Hubert and Tr{\'e}meau, Alain},
  title = {A Spatiotemporal Saliency Model for Video Surveillance},
  journal = {Cognitive Computation},
  publisher = {Springer-Verlag},
  year = {2011},
  volume = {3},
  number = {1},
  pages = {241-263},
  keywords = {Visual saliency; Motion saliency; Background subtraction; Center-surround saliency; Face detection; Video surveillance},
  url = {http://dx.doi.org/10.1007/s12559-010-9094-8},
  doi = {10.1007/s12559-010-9094-8}
}
BibTeX:
@conference{Zewdie2014,
  author = {C. Zewdie and M. Pedersen and Z. Wang},
  title = {A new pooling strategy for image quality metrics: five number summary},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {Dec},
  year = {2014}
}
BibTeX:
@inproceedings{Zhao2015,
  author = {Ping Zhao and Marius Pedersen},
  title = {Extending subjective experiments for image quality assessment with baseline adjustments},
  booktitle = {Image Quality and System Performance XII},
  address = {San Francisco, CA, USA},
  month = {Feb},
  publisher = {SPIE},
  year = {2015},
  series = {Proceedings of SPIE/IS\&T Electronic Imaging},
  volume = {9396},
  pages = {9396-27}
}
BibTeX:
@conference{Zhao2014,
  author = {Ping Zhao and Marius Pedersen and Jon Yngve Hardeberg},
  title = {Image registration for quality assessment of projection displays},
  booktitle = {IEEE International Conference on Image Processing (ICIP)},
  address = {Paris, France},
  publisher = {IEEE},
  year = {2014}
}
BibTeX:
@conference{Zhao2013,
  author = {Ping Zhao and Marius Pedersen and Jon Yngve Hardeberg and Jean-Baptiste Thomas},
  title = {Camera-based measurement of relative image contrast in projection displays},
  booktitle = {European Workshop on Visual Information Processing (EUVIP)},
  address = {Paris, France},
  month = {June},
  year = {2013}
}
Abstract: Spatial uniformity is one of the most important image quality attributes in the visual experience of displays. In conventional research, spatial uniformity was mostly measured with a radiometer, and its quality was assessed with no-reference image quality metrics. Cameras are cheaper than radiometers, and they can provide accurate relative measurements if they are carefully calibrated. In this paper, we propose and implement a work-flow to use a calibrated camera as a relative acquisition device of intensity to measure the spatial uniformity of projection displays. The camera intensity transfer functions for every projected pixel are recovered, so we can produce multiple levels of linearized non-uniformity on the screen for the purpose of image quality assessment. The experimental results suggest that our work-flow works well. Moreover, none of the frequently cited uniformity metrics correlates well with the perceptual results for all types of test images. The spatial non-uniformity is largely masked by the high frequency components in the displayed image content, and we should simulate the human visual system to ignore the non-uniformity that cannot be discriminated by human observers. The simulation can be implemented using models based on contrast sensitivity functions, contrast masking, etc.
BibTeX:
@inproceedings{Zhao2014a,
  author = {Ping Zhao and Marius Pedersen and Jean-Baptiste Thomas and Jon Y. Hardeberg},
  title = {Perceptual Spatial Uniformity Assessment of Projection Displays with a Calibrated Camera},
  booktitle = {The 22nd Color and Imaging Conference (CIC)},
  address = {Boston, MA, USA},
  month = {Nov},
  publisher = {IS\&T},
  year = {2014},
  pages = {159--164},
  url = {http://www.ingentaconnect.com/content/ist/cic/2014/00002014/00002014/art00028}
}
BibTeX:
@incollection{Ziko2014,
  author = {Ziko, Imtiaz Masud and Beigpour, Shida and Hardeberg, Jon Yngve},
  title = {Design and Creation of a Multi-illuminant Scene Image Dataset},
  booktitle = {Image and Signal Processing},
  publisher = {Springer International Publishing},
  year = {2014},
  series = {Lecture Notes in Computer Science},
  volume = {8509},
  pages = {531--538},
  keywords = {Color Constancy; Multi-illuminant Dataset},
  url = {http://dx.doi.org/10.1007/978-3-319-07998-1_61},
  doi = {10.1007/978-3-319-07998-1_61}
}
BibTeX:
@proceedings{Hardeberg2011b,
  title = {Gj{\o}vik Color Imaging Symposium},
  address = {Gj{\o}vik, Norway},
  year = {2011},
  series = {Gj{\o}vik University College Report Series},
  number = {6},
  note = {ISSN 1890-520X},
  url = {http://brage.bibsys.no/hig/handle/URN:NBN:no-bibsys_brage_30196}
}
BibTeX:
@proceedings{Simone2009a,
  title = {Gj{\o}vik Color Imaging Symposium},
  address = {Gj{\o}vik, Norway},
  month = {Jun},
  year = {2009},
  series = {Gj{\o}vik University College Report Series},
  number = {4},
  note = {ISSN 1890-520X},
  url = {http://brage.bibsys.no/hig/bitstream/URN:NBN:no-bibsys_brage_9313/3/sammensatt_elektronisk.pdf}
}

Please report errors to Raju Shrestha. Updated on 28/06/2015.