Image Analysis Tools
Digital cameras can be powerful tools for measuring colours and patterns in a huge range of disciplines. However, in normal ‘uncalibrated’ digital photographs the pixel values do not scale linearly with the amount of light measured by the sensor, so pixel values cannot be reliably compared between different photos, or even between regions within the same photo, unless the images are calibrated to be linear and any lighting changes are controlled for. Some scientists are aware of these issues but lack convenient, user-friendly software for working with calibrated images, while many others continue to measure uncalibrated images.

We have developed a toolbox that can calibrate images from many common consumer digital cameras, and for some cameras the images can be converted to “animal vision”, to measure how the scene might look to non-human viewers. Many animals, including most insects, birds, reptiles and amphibians, and some fish and mammals, can see into the ultraviolet (UV) spectrum, so it is important to measure UV when working with these species. Our toolbox can combine photographs taken through multiple colour filters, for example combining normal photographs with UV photographs so that the scene can be converted to animal vision across the animal’s whole range of sensitivities.
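The non-linearity problem described above can be illustrated with a small sketch. This assumes an idealised gamma curve of 2.2 purely for illustration; real cameras each have their own response curve, which is why the toolbox calibrates them rather than assuming one:

```python
# Sketch: why uncalibrated pixel values mislead, and how linearisation helps.
# The 2.2 gamma below is an illustrative assumption, not any real camera's curve.

def encode(linear):
    """Idealised gamma encoding: relative light -> stored pixel value (0-1)."""
    return linear ** (1 / 2.2)

def linearise(pixel):
    """Invert the encoding: stored pixel value -> relative light."""
    return pixel ** 2.2

# A surface reflecting twice as much light...
low, high = encode(0.2), encode(0.4)
# ...does NOT give twice the pixel value in the uncalibrated image:
ratio_raw = high / low                          # ~1.37, not 2.0
# After linearisation the true 2:1 relationship is recovered:
ratio_lin = linearise(high) / linearise(low)    # ~2.0
print(ratio_raw, ratio_lin)
```

This is why ratios or differences measured on uncalibrated pixel values do not reflect ratios or differences in actual light.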
Download & Install
This toolbox requires a working installation of ImageJ. Download the version of the toolbox for your operating system, unzip the files and place them in your imagej/plugins folder. See the user guide for more specific details.
- micaToolbox version 1.22 Windows (don’t install or unzip ImageJ into Program Files; place it elsewhere, e.g. your Documents folder)
- micaToolbox version 1.22 Linux
- micaToolbox version 1.22 Mac (note: you might need to recompile DCRAW for Mac, see the user guide. MacOS 10.12 Sierra users need to copy the ImageJ.app file out of the ImageJ directory and paste it back in to get plugins to work. See here).
- User Guide version 1.11
- Paper (Open access, full version)
- Sample MSPEC images
FAQ
- Using MacOS Sierra, some of the functions don’t appear in the plugins menu. This is due to something called path randomisation (see here): “You can disable path randomization by moving ImageJ.app out of the ImageJ folder and then copying it back. If the ImageJ folder is in /Applications you will need to hold down the alt key while dragging ImageJ.app out of the ImageJ folder.”
- I get an error when the program tries to load an image. Make sure you haven’t ticked the “non-RAW” box if you are loading RAW images. That box tells the software to work with non-RAW images, such as JPG or TIFF files, but you need to model the camera’s linearity function before you can use it.
- Can this software be rolled out for Android, iOS, Chromebook, or cloud processing? It could be, but it would mean mapping the linearisation curves and spectral sensitivities of every different mobile phone camera, so in practice it wouldn’t be worth doing. Mobile phone cameras are also unable to photograph in UV, so only a limited number of animal visual systems could be mapped to. In addition, phones normally only output compressed images, which adds extra problems (e.g. compression artefacts and a reduced colour gamut compared to RAW images).
- Can I open MSPEC images in Photoshop or anything else? No. MSPEC images are just text files that tell the toolbox how to open the RAW file(s) correctly (performing calibration and alignment). MSPEC images are opened as 32-bits-per-channel images, and many other software packages won’t support this level of detail or display such images correctly if you save them in this format. Use the “Make Presentation Image” tool to produce a simple RGB colour image that can be saved as a standard 8-bits-per-channel image.
- Why are MSPEC images shown in black & white by default? Because the toolbox can deal with more than three channels (which would be impossible to display in colour), each channel of output from the camera is displayed as a separate (achromatic) slice. Scroll between the slices to see and measure them separately (or use the “plugins>measure>measure all slices” tool to measure all slices at once). You can make a colour or false-colour image for presentation with the tools included (see the previous question).
- Where can I buy the Iwasaki eyeColor bulb? In the UK you can get them here. You also need to run this lamp from a ballast (ask your bulb supplier about this), and it needs wiring together. The wiring is straightforward, but ask an electrician if you’re not comfortable. There are many other potential light sources that might work for full-spectrum UV photography, e.g. the Exo Terra SunRay Metal Halide lamp, which is ready to use for UV straight out of the box. However, care should be taken, as this is a focused bulb with very high UVB output (and it is therefore more dangerous to work with than the eyeColor bulb). The focused beam also makes controlled illumination more difficult, given that the standard must receive the same light as your sample. See the user guide for further discussion of lighting.
- The MSPEC images are really dark – it’s difficult to see the thing I’m selecting. Linear MSPEC images make dark things look really dark. When you make your MSPEC images, select the “Visual 32-bit” option if you’re just working with normal visible images, or “Pseudo-UV 32-bit” if you’re working with UV & visible. These options show non-linear colour images that look good on your monitor. The pseudo-colour UV images will show any misalignment as the blue channel not matching up with the others. Remember not to measure the colours in these images – they’re just there to help you see everything more easily – use the batch measurement tool afterwards to take your measurements.
- When generating a new cone-catch model, an error crops up complaining that compilation failed. Make sure the name of the camera you’ve added doesn’t start with a number (e.g. “7D Canon” might not work; try “Canon 7D” instead). This is a known bug, but quite easy to remedy. It results in a “.java” file being created with no accompanying “.class” file.
- Why do I have negative reflectance values when using two or more standards? Negative reflectance is obviously a physical impossibility, but under certain circumstances you would actually expect it from the image processing. Camera sensors have inherent noise (which increases with gain, i.e. with the ISO setting), so if the camera is photographing something with very low reflectance you would expect some pixels to fall below zero due to this noise (it’s slightly more complicated than this, as there’s also a bounding effect around zero values). However, if the mean values over lots of pixels are negative, something has gone wrong with your standards: e.g. your dark standard’s actual reflectance is higher than the number you’ve used, or there is some lighting difference or lens flare over the standard that isn’t over the thing you’re measuring. One way around this is to use the ‘estimate black point’ option instead of, or as well as, a dark standard; alternatively, if lens flare isn’t likely to be an issue, use only one standard and don’t use ‘estimate black point’. If there is any noticeable lens flare over the standard, the photo should not be used.
- When opening MSPEC images there is a DCRAW error. Make sure there are no spaces in the file names of the RAW images, as this can sometimes cause issues.
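The linearity modelling mentioned above (needed before measuring non-RAW images such as JPGs) can be sketched in outline. This is not the toolbox’s own fitting code; it just illustrates the principle of fitting a simple power-law response to grey standards of known reflectance, using made-up reflectance and pixel values:

```python
# Sketch of modelling a camera's linearity function from grey standards,
# assuming the response follows a power law (pixel = k * light^g).
# The reflectances and pixel values below are invented example numbers.
import math

# Known reflectances of hypothetical grey standards, and the mean pixel
# values measured from them in a JPG (0-255 scale):
reflectance = [0.02, 0.10, 0.40, 0.99]
pixel = [38.0, 82.0, 152.0, 230.0]

# Fit log(pixel) = log(k) + g * log(reflectance) by least squares:
xs = [math.log(r) for r in reflectance]
ys = [math.log(p) for p in pixel]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
g = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k = math.exp(my - g * mx)

def linearise(p):
    """Convert a non-linear pixel value back to relative light."""
    return (p / k) ** (1 / g)

# The fitted model recovers roughly the original reflectances:
print([round(linearise(p), 3) for p in pixel])
```

The toolbox builds and applies this kind of model for you; the point here is only that grey standards spanning a range of reflectances are what make the fit possible.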
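The two-standard normalisation discussed in the negative-reflectance question above can be sketched to show how sensor noise produces the occasional negative pixel. The standard reflectances and pixel values here are illustrative, not taken from the toolbox:

```python
# Sketch of two-standard reflectance normalisation, showing why sensor
# noise can push individual pixels below zero. All numbers are invented.

def to_reflectance(pixel, dark_px, white_px, dark_ref=0.01, white_ref=0.99):
    """Linear map from (linearised) pixel values to reflectance, anchored
    on a dark and a white standard of known reflectance."""
    scale = (white_ref - dark_ref) / (white_px - dark_px)
    return dark_ref + (pixel - dark_px) * scale

# Mean pixel values measured from the two standards:
dark_px, white_px = 50.0, 2400.0

# Pixels from a very dark object, with noise scattered around a true
# value close to the dark standard's pixel level:
noisy_pixels = [38.0, 52.0, 25.0, 44.0]
reflectances = [to_reflectance(p, dark_px, white_px) for p in noisy_pixels]

# Individual pixels can fall below zero purely because of noise,
# while the mean over many pixels stays (correctly) positive:
print(min(reflectances), sum(reflectances) / len(reflectances))
```

As the FAQ entry says: scattered negative pixels are expected near black, but a negative *mean* over many pixels points to a problem with the standards themselves.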
Updates
- 31/7/2017 – Version 1.22: the GabRat disruptive coloration tool has been added to the toolbox (see here for an explanation of what it measures). Use the tool by drawing an ROI around your target object, then go to “plugins>Measure>Measure GabRat Disruption”.
- 28/4/2017 – Additional cameras added (e.g. Sony A7 with kit 28-70mm lens), plus a tool for converting from CIEXYZ (cone-catch) to CIELAB (useful for various analyses). The toolbox may now support FIJI (up to now it has only worked with normal ImageJ), though on Linux there are compilation issues with FIJI and Java 1.8. There are also a few little bug fixes (e.g. loading an MSPEC image used the settings chosen previously rather than the new settings).
- 14/11/2016 – Lots of big updates with v1.2: there is now support for non-linear 24-bit images (such as standard 8-bits-per-channel JPGs), although you need access to grey standards of known reflectance to generate a linearisation model. The cone-mapping models are now generated using the JAMA library, so R is no longer required, and they can now accept different illuminant spectra for the photography and modelled conditions.
- 15/03/2016 – Bug fixed when generating cone-catch models in ImageJ version 1.50 or greater, which caused an “IllegalArgumentException: ‘Label’ column not found” error. Loading multispectral images now offers the option of converting to colour if you don’t want to measure the image. An option has also been added to the batch analysis tool to allow scaling all images equally irrespective of scale bars (useful for scaling down noisy images).
- 14/12/2015 – Bug in the luminance JND calculation fixed, and a separate tool created for calculating luminance JNDs following Siddiqi et al. (2004). Camera sensitivity files were also renamed so that none start with a number (which stopped compilation working correctly).
- 6/10/2015 – Photo screening portrait image fix.
- 5/9/2015 – DCRAW for Mac problem fixed (see user guide) – many thanks to Matt Henry.
- 7/8/2015 – Addition of photo screening tools, providing photo preview and exposure testing, and easy creation of MSPEC images. Bug fix to JND measurement tool.
- 29/7/2015 – Bug fix: Generate Cone Catch Model wasn’t working on Windows (tested on Windows 7).
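The luminance JND calculation mentioned in the 14/12/2015 entry is, as commonly implemented following Siddiqi et al. (2004), the absolute log ratio of two luminance cone-catch values divided by the Weber fraction of the luminance channel. A minimal sketch, with an illustrative Weber fraction of 0.05 (use a value appropriate to your study species):

```python
# Sketch of an achromatic (luminance) just-noticeable-difference in the
# style of Siddiqi et al. (2004). The 0.05 Weber fraction is illustrative.
import math

def luminance_jnd(qa, qb, weber=0.05):
    """Achromatic contrast (in JNDs) between two luminance cone-catch
    values qa and qb: |ln(qa/qb)| / weber."""
    return abs(math.log(qa / qb)) / weber

# Two patches differing by ~10% in luminance cone catch:
print(luminance_jnd(1.10, 1.00))   # ~1.9 JNDs
```

Values around 1 JND are conventionally taken as the threshold of discriminability under good viewing conditions; note the measure is symmetric in the two patches.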