Image Analysis Tools

Multispectral Image Calibration and Analysis Toolbox

Digital cameras can be powerful tools for measuring colours and patterns in a huge range of disciplines. However, in normal ‘uncalibrated’ digital photographs the pixel values do not scale linearly with the amount of light measured by the sensor. This means that pixel values cannot be reliably compared between different photos, or even between regions within the same photo, unless the images are calibrated to be linear and any lighting changes are controlled for. Some scientists are aware of these issues but lack convenient, user-friendly software for working with calibrated images, while many others continue to measure uncalibrated images. We have developed a toolbox that can calibrate images from many common consumer digital cameras, and for some cameras the images can be converted to “animal vision”, to measure how the scene might look to non-humans. Many animals – most insects, birds, reptiles and amphibians, and some fish and mammals – can see down into the ultraviolet (UV) spectrum, so it is important to measure UV when working with these animals. The toolbox can also combine photographs taken through multiple colour filters, for example merging normal and UV photographs and converting them to animal vision across the animal’s whole range of sensitivities.
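
To illustrate the non-linearity: consumer image formats typically encode pixel values with something close to the sRGB gamma curve, so a pixel value must be decoded before it tracks light linearly. A minimal Python sketch of that decoding (illustrative only – the toolbox models each camera’s actual response rather than assuming sRGB):

    import numpy as np

    def srgb_to_linear(v):
        # Approximately invert the standard sRGB transfer curve.
        # Real cameras apply their own tone curves on top of this,
        # which is why each camera's linearity is modelled individually.
        v = np.asarray(v, dtype=float)
        return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

    # A mid-grey pixel (128/255) corresponds to ~22% of the light of white,
    # not 50% -- doubling a pixel value does not double the measured light.
    print(srgb_to_linear(128 / 255))  # ~0.215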

The new home of the micaToolbox, user guide, and community support is www.empiricalimaging.com


This toolbox requires a working installation of ImageJ. Download the version of the toolbox for your operating system, unzip the files and place them in your imagej/plugins folder. See the user guide for more specific details.

These downloads are provided as archives – go to www.empiricalimaging.com for the latest versions with loads more functions.

Tutorial Video (note: the videos on the new site are more up to date)

FAQs

  • On macOS Sierra some of the functions don’t appear in the plugins menu. This is due to something called path randomisation (see here): “You can disable path randomization by moving ImageJ.app out of the ImageJ folder and then copying it back. If the ImageJ folder is in /Applications you will need to hold down the alt key while dragging ImageJ.app out of the ImageJ folder.”
  • I get an error when the program tries to load an image. Make sure you haven’t ticked the “non-raw” box. This box lets the software work with non-RAW images, such as JPG or TIFF files, but only after you have modelled the camera’s linearisation function.
  • Can this software be rolled out for Android, iOS, Chromebook, or cloud processing? This could be done, but it would mean mapping the linearisation curves and spectral sensitivities of every different mobile phone camera, so in practice it wouldn’t be worth doing. Mobile phone cameras are also unable to photograph in UV, so only a limited number of animal visual systems could be mapped to. Phones also normally output only compressed images, which adds extra problems (e.g. compression artefacts and a reduced colour gamut compared to RAW images).
  • Can I open MSPEC images in Photoshop/anything else? No. MSPEC images are just text files that tell the toolbox how to open the RAW file(s) correctly (performing calibration and alignment). MSPEC images are opened as 32-bits-per-channel images, and many other software packages won’t correctly support or display images saved at this bit depth. Use the “Make Presentation Image” tool to produce a simple RGB colour image for saving as a standard 8-bits-per-channel colour image.
  • Why are MSPEC images shown in black & white by default? Each channel of output from the camera is displayed in a separate (achromatic) slice. Scroll between the slices to see them and measure them separately (though you can use the “plugins>measure>measure all slices” tool to measure all slices more easily). This is because the toolbox can deal with more than three channels (which would be impossible to display in colour). You can make a colour or false-colour image for presentation with the tools included (see above post).
  • Where can I buy the Iwasaki eyeColor bulb? In the UK you can get them here. You also need to run this lamp from a ballast (ask your bulb supplier about this), and it needs wiring together. The wiring is straightforward, but ask an electrician if you’re not comfortable. There are many other potential light sources that might work for full-spectrum UV photography, e.g. the Exo Terra SunRay metal halide lamp, which is ready for UV use straight out of the box; but take care, as this is a focussed bulb with very high UVB output (and is therefore more dangerous to work with than the eyeColor bulb). The focused beam also makes controlled illumination more difficult, given that the standard must receive the same light as your sample. See the user guide for further discussion of lighting.
  • The MSPEC images are really dark – it’s difficult to see the thing I’m selecting. Linear MSPEC images make dark things look really dark. When you make your mspec images, select the “Visual 32-bit” option if you’re just working with normal visible images, or “Pseudo-UV 32-bit” if you’re working with UV & visible. These options show you non-linear colour images that look good on your monitor. The pseudo-colour UV images will show misalignment as the blue channel not matching up with the others. Remember not to measure the colours in these images – they’re just there to help you see everything more easily – use the batch measurement tool to take measurements afterwards.
  • When generating a new cone-catch model, an error appears complaining that compilation failed. Make sure the name of the camera you’ve added doesn’t start with numbers (e.g. “7D Canon” might not work; try “Canon 7D” instead). This is a known bug, but quite easy to remedy. It results in a “.java” file being created, but no accompanying “.class” file.
  • Why do I have negative reflectance values when using two or more standards? Negative reflectance values are obviously a physical impossibility, but you would actually expect them from the image processing under certain circumstances. Camera sensors have inherent noise (which rises with gain, i.e. the ISO setting), so when photographing something with very low reflectance you would expect some pixels to fall below zero due to this noise (it’s slightly more complicated than this, as values are also bounded at zero) – see the sketch after this list. If, however, the mean values over lots of pixels come out negative, something has gone wrong with your standards: e.g. your dark standard’s reflectance is actually higher than the number you’ve used, or there is some lighting difference or lens flare over the standard that isn’t over the thing you’re measuring. One way around this is to use the ‘estimate black point’ option instead of, or as well as, a dark standard; or, if lens flare isn’t likely to be an issue, use only one standard and no ‘estimate black point’. If there is any noticeable lens flare over the standard, the photo should not be used.
  • When opening mspec images there is a DCRAW error. Make sure there are no spaces in the file names of the raw images as this can sometimes cause issues.
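
The sketch below (plain numpy with invented numbers, not toolbox code) shows how individual pixels of a very dark object can calibrate to negative reflectance even when both standards are correct:

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented linear pixel values: a 95% white standard, a 5% dark
    # standard, and a near-black object (~0.3% true reflectance),
    # each with additive sensor noise.
    white  = 9500 + rng.normal(0, 30, 10000)
    dark   =  500 + rng.normal(0, 30, 10000)
    sample =   30 + rng.normal(0, 30, 10000)

    # Two-point calibration: map the standards to their known reflectances.
    slope = (0.95 - 0.05) / (white.mean() - dark.mean())
    reflectance = 0.05 + (sample - dark.mean()) * slope

    print((reflectance < 0).mean())  # ~16% of pixels dip below zero
    print(reflectance.mean())        # but the mean stays near the true 0.003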

Recent Changes:

  • 31/7/2017 – Version 1.22: the GabRat disruptive coloration tool has been added to the toolbox (see here for an explanation of what it measures). Use the tool by drawing an ROI around your target object, then go to “plugins>Measure>Measure GabRat Disruption”.
  • 28/4/2017 – Additional cameras added (e.g. Sony A7 with kit 28-70mm lens), plus a tool for converting from CIEXYZ (cone-catch) to CIELAB (useful for various analyses). The toolbox may now support FIJI (until now it has only worked with normal ImageJ), though on Linux there are compilation issues with FIJI and Java 1.8. There are also a few little bug fixes (e.g. loading an mspec image used the settings chosen previously rather than the new settings).
  • 14/11/2016 – Lots of big updates with v1.2: there is now support for non-linear 24-bit images (such as standard 8-bits/channel JPG images), though you need access to grey standards of known reflectance to generate a linearisation model. The cone-mapping models are now generated using the JAMA library, so R is no longer required, and they can now accept different illuminant spectra for the photography and modelled conditions.
  • 15/03/2015 – Bug fixed when generating cone-catch models in ImageJ version 1.50 or greater, causing an “IllegalArgumentException: ‘Label’ column not found” error. Loading multispectral images now offers the option of converting to colour if you don’t want to measure the image. An option has also been added to the batch analysis tool to allow scaling all images equally irrespective of scale bars (useful for scaling down noisy images).
  • 14/12/2015 – Bug in the luminance JND calculation fixed, and a separate tool for calculating luminance JNDs following Siddiqi et al. 2004 created. Camera sensitivity files were also renamed to remove any starting with numbers (names starting with numbers stopped compilation from working correctly).
  • 6/10/2015 – Photo screening portrait image fix.
  • 5/9/2015 – DCRAW for Mac problem fixed (see user guide) – many thanks to Matt Henry.
  • 7/8/2015 – Addition of photo screening tools, providing photo preview and exposure testing, and easy creation of MSPEC images. Bug fix to JND measurement tool.
  • 29/7/15 – Bug fix – Generate Cone Catch Model wasn’t working on Windows (tested on Windows 7).

133 Comments

  1. August 7
    Reply

    FWIW, Google Mail blocks d/l of your MicaToolBox zip files … as well as .exe files.
    Just so ya know. No response expected.
    Wish I could’ve used your research, I raise bees & have a LOT of wildlife on my property.
    Best wishes

    • Matt
      August 31
      Reply

      What does Google Mail (Gmail) have to do with downloading the software provided from the links above? Are you trying to email the software to yourself?

  2. Kait
    August 7
    Reply

    This is amazing! How long did it take to make this and how did you get the vision of an animal?

    • jolyon
      August 7
      Reply

      Thanks! The toolbox has accumulated over the past couple of years as I’ve been working on camouflage in our lab. Knowing how the vision of other animals works is based on decades of research by other researchers though. Determining an animal’s sensitivity to different wavelengths is very difficult, and requires microspectrophotometry of the cone cells in the retina, or flickering specific wavelengths of light at cone cells and seeing how they respond.

  3. jim
    August 8
    Reply

    Is a Chromebook cloud version in the works?

    • jolyon
      August 8
      Reply

      I’ve not got any plans to expand to other operating systems as the vast majority of users will use windows, mac or linux. Though you could install linux on your chromebook.

      • Jorge
        August 15
        Reply

        Hey sir, you got a pretty good thing going here, but you need to consider expanding… I think this type of software would make a pretty good Android app, I’m just saying.
        Possibly create a test (free) version..
        And if the people like it they could download a full (paid) version.
        I know I would..
        Besides, there are a lot more people that work on camouflage, and I can see them really using this type of software to do some on-the-go image analyzing.
        I can see it being very successful.

        • jolyon
          August 18
          Reply

          I want to make these tools free for all scientists, so am keen to leave it as a free, open-source project.

  4. Ardy Bee
    August 8
    Reply

    So you’re not going to make this available for Android? What a shame……

    • jolyon
      August 11
      Reply

      Sorry – I don’t have the ability to maintain and calibrate all the phone cameras out there right now! It would be possible for working with non-UV images though.

  5. We are interested in collaborations to assist with mapping gradient color sensitivities to compare differences among various hymenoptera.
    Honey Bee Research Institute and Nature Center Inc., 24 Gendreau Road, Saint David, Maine 04756

    • jolyon
      August 11
      Reply

      Do send me an email (jt at jolyon dot co dot uk).

  6. Elena
    August 9
    Reply

    Waiting for an android version 🙂

  7. Randall badilla
    August 9
    Reply

    just amazing… using this software and the knowledge behind it we can push further not only how animals see the world but also how people with eye diseases live with us.

  8. Randall Lee Reetz
    August 10
    Reply

    Is there a place online where one could upload a digital image and download a conversion of that image to insect vision?

    • jolyon
      August 11
      Reply

      For now you’ll have to use the software. It’s pretty fast at processing through, and should work on most platforms.

  9. Randall Lee Reetz
    August 10
    Reply

    I’ve been seriously injured on bicycle rides as a result of insect bites. Wasps and bees seem particularly interested in the helmets I wear. I was wondering if there would be a way to submit a series of images of various helmets (in various colors and textures) through your insect vision filters, so that I might see which would be less visible to the sorts of insects that can cause havoc to a bicyclist? The problem of course is that helmets are best when they are very visible to other riders and to motorists and pedestrians (and large wildlife: deer, coyotes, dogs, mountain lions, squirrels, and birds). Is there a color or range of colors that might be invisible or less visible to insects and more visible and even alarming to humans and other animals that may be on roads and trails?

    Thanks, Randall Lee Reetz

    • jolyon
      August 11
      Reply

      That’s a really interesting question! I imagine the plastics used in most cycle helmets absorb UV (are black in UV), but if there are UV reflective parts that could make them attract pollinators. I guess most bees and wasps that get stuck in helmets are accidental as you shoot past them though. Wasps aren’t pollinators, so shouldn’t be attracted by colour quite as much as bees.

      • Callen Peter
        August 27
        Reply

        Yes, wasps are pollinators! Very important ones at that. So making clothing and helmets which are not attractive to bees and wasps would be a great application of this technology.

  10. Randall Lee Reetz
    August 10
    Reply

    In addition, bees, flies, moths, and wasps can fly into the open mouths of riders. It is impractical of course to cover the mouth with anything that may obstruct a cyclist’s ability to breathe freely. What might be done with colors and patterns to keep insects away from a rider’s mouth?

  11. August 10
    Reply

    I want to download this software because I am interested in this application of a scientific nature

  12. Ian prins
    August 10
    Reply

    I read you were not planning on expanding to other platforms, but this looks like an extremely good concept for a mobile app.
    To give a true animal view of the world it would need active rendering.
    I don’t know how much processing power it takes to make the adjustments, but I’ll throw it out there.

    • jolyon
      August 11
      Reply

      The trouble with using mobile phone cameras is that they won’t produce raw (linear) output, so every phone would need to be carefully calibrated for linearity and its sensitivity curves. So while it would be possible it would be a lot of work to maintain.

  13. bambi
    August 10
    Reply

    Hi Jolyon: I was directed to your website via gizmag. I am a honey bee keeper and I would love to make the fantastic images you have of what objects may look through a bees eyes. I have watched your over 1 hour detailed tutorial, but being a lazy Australian, I was wondering if there was a less complex way of doing it? *smiles*

    • jolyon
      August 11
      Reply

      I’m afraid bees have UV vision, so you’d need a UV camera (which makes things far more complicated!)

  14. LEONID
    August 12
    Reply

    Unreal thanks! With love from Russia <3<3<3

  15. Stephanie
    August 14
    Reply

    Can you make this into a free app for android phones? Or is it not possible

  16. Subcomputer
    August 14
    Reply

    Very nice! Is there enough data so far on Trachemys scripta elegans or Chrysemys picta (both often miscategorized back and forth, and with Pseudemys) to add one of them at some time? They seem to have some heritage in the field, and I’d like to see how their perception of self and food items works out.

    • jolyon
      August 19
      Reply

      I’m not sure, you might have to go with some other related species likely to have similar vision (e.g. turtles that have had their spectral sensitivities measured). They’re likely to be tetrachromats, but who knows what filtering their oil droplets might be doing…

  17. marios
    August 15
    Reply

    This could be a great app for smartphones.

  18. Specta
    August 15
    Reply

    When will it be possible to use it with smartphones… Android, iOS and Windows phones?

  19. jolyon
    August 18
    Reply

    Hi, currently I don’t have the time to create a standalone program, and ImageJ has loads of built in tools that make it a great option for dealing with this image processing. If you watch the video you’ll see it’s pretty easy to install and use.

  20. Wilhelm
    August 21
    Reply

    No good – impossible to set up on Windows XP 🙁

    • jolyon
      August 24
      Reply

      Is it DCRAW that doesn’t work on XP? It should be possible to compile a version for XP, though I don’t have an XP system available I’m afraid.

  21. Heather
    August 29
    Reply

    Great idea… And keep it open source and free, for sure. What colors are cats attracted to? Do they see in UV?

    • jolyon
      August 31
      Reply

      Cats probably just see in blue and yellow, though their eyes are adapted for night/low light vision with a higher ratio of rods to cones than many other mammals. So their colour vision is probably not all that great.

  22. Vinicius
    October 11
    Reply

    Hi Jolyon

    Where can I find the white and gray standards you used in the tutorial video?

    Thanks a lot

    • jolyon
      October 12
      Reply

      Hi Vinicius, I made that standard. I should start selling them really, though I don’t have the time to make them. The big labsphere standards are too bulky to use with most macro photography, so these little twin square ones are great.

  23. Alfonso Aceves
    November 22
    Reply

    Hi, congratulations, this is a great tool with many applications. I belong to a study group with Luis Robledo and we have been experimenting with the toolbox, but last time we had some issues when trying to generate the cone-mapping model; on Mac computers the log window gave this:
    Waiting for R to process data…
    Operating system: Mac
    Command: Rscript /Users/AlfonsoAceves/rScript.r
    Cannot run program “Rscript”: error=2, No such file or directory

    Have you run into this issue before? Any recommendations?
    I’d appreciate your response; have a good day,

    A

    • jolyon
      November 25
      Reply

      Hi Alfonso, Thanks! I’m not sure what’s causing that error. I don’t have access to a mac, so can’t easily debug these things. What version are you running? In the meantime you could generate the mapping files with a windows or linux machine and then copy them across to the mac. Cheers, Jolyon

  24. chen
    December 10
    Reply

    Amazing work, Jolyon!
    Here is a simple question: where can I get other camera or lens sensitivity data?
    Thank you

    • jolyon
      December 10
      Reply

      Thanks Chen! Calculating a camera’s spectral sensitivity isn’t particularly easy. The easiest way is probably with a monochromator and spectroradiometer. Shine monochromated light onto a diffuse white standard. Take a photo of the standard at a fixed/known shutter speed, then measure the radiance of that sample, then repeat at e.g. 5nm intervals across the whole spectral range of the camera (e.g. 400-700nm for normal cameras). You then compare the pixel values (multiplied by exposure time if different exposures were used) to the known radiance at each wavelength to build up the spectral sensitivity. You need to ensure you’re working with linear pixel values when doing this though (very important)!
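
      A sketch of that arithmetic in Python (all numbers invented – a real run would use your measured pixel values, exposures and radiances at each monochromator step):

          import numpy as np

          wavelengths = np.arange(400, 705, 5)                      # nm, 5 nm steps

          # Invented stand-in data: replace with measurements of the white
          # standard photographed under monochromated light.
          radiance    = np.full(wavelengths.size, 1.0)              # spectroradiometer readings
          exposure    = np.full(wavelengths.size, 0.01)             # shutter speeds (s)
          pixel_value = np.exp(-((wavelengths - 550) / 60.0) ** 2)  # linear pixel means

          # Camera response per unit radiance per unit exposure, peak-normalised.
          sensitivity = (pixel_value / exposure) / radiance
          sensitivity /= sensitivity.max()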

  25. Amanda
    December 10
    Reply

    Hi there,
    Excellent program! Such a great idea and the instructional video is so good.
    I’m having an issue with creating a cone catch model. Each time I try to generate a new model (using the data which come with the package) I get two error messages pop up. One says “Plugin or class not found: “Cone Model Name” (java.lang.ClassNotFoundException: Cone Model Name)” And then a second pops up saying “Macro Error, No window with title “Exception” found”.
    I don’t know much C++, but I added in print commands and it seems to run line 751 “run(“Compile and Run…”, compileString);” but then gets stuck. There is no .class file created in the Cone Models folder, but a .java file is created.
    I’ve tried reinstalling ImageJ outside Program Files (in case it was an access issue) but no luck. I’m using my own laptop running Windows 10, R 3.2.2, Java 1.8.0.
    Does anyone have any suggestions? I’ve run out of ideas!
    Thanks so much!
    Amanda.

    • jolyon
      December 10
      Reply

      Hi Amanda, I think someone else possibly ran into this issue (is the code creating a “.java” file in plugins/Cone Models/, but no “.class” file?). In the end it turned out that the compiling commands don’t like the files to start with numbers or have certain characters in the name. Try changing the name of your camera in plugins/Cone Mapping/Cameras to take out any numbers at the start of the file name. Do let me know if this solves your issues!

  26. Kristen
    February 5
    Reply

    Hi! I am a PhD student in California, studying UV floral patterns in annual wildflowers. Whenever I try to generate a multispectral image, I get to the prompt “Select Visible Photo containing standard” and select a raw vis photo with a standard, and it says ‘Lookup Thread terminated with code 1. Cannot decode file __(the file name).rw2’. Any suggestions on where the problem is? Thank you so much for making this tutorial and software freely available!!! It’s been a huge help already. – Kris

    • jolyon
      February 8
      Reply

      Hi Kristen, sorry you’ve been having issues. I think I might have encountered this issue sporadically, and have never been able to replicate it. It seems to be due to DCRAW playing up, and I always found restarting ImageJ sorted it. I recommend trying to open ImageJ with a different version of Java. Do let me know if this helps, and feel free to email if it doesn’t.

  27. Cedric
    February 29
    Reply

    Hi Jolyon. I have the same issue as Amanda (“Macro Error, No window with title “Exception” found”). That’s the only prompt that appears when I try to create a model using your nikkon 300D with 60mm lens (400-700), D65 (400-700), Gobiusculus flavescens, natural spectra 400-700, n=3, no stepwise, no square transforms, and including diagnostic plots. The program creates both files (.java and .class) and the model becomes available to use when refreshing the menus. When I change the camera name in the plugin folder (no capital letters, replacing spaces with underscores) I get both error messages Amanda describes. I am using a Win10 64-bit system, ImageJ (64-bit) v.1.49, installed in Program Files (x86), Java 1.8.0_73.

    • jolyon
      March 1
      Reply

      Hi Cedric, thanks for the info. I’ll try to fix this issue. At least the model is created ok, but it’s annoying to have error windows. Cheers, Jolyon

  28. John Cavagnaro
    March 8
    Reply

    1. What nikkor 105 lens are you using for the spectral sensitivity curves in your paper (just called nikkor 105mm)? the UV nikkor f4.5, or the f2.8 vr micro-nikkor? Is spectral sensitivity not mapped for the UV-nikkor on the d7000?

    2. In the full spectrum converted d7000s, does the quartz filter have an AR coating of any kind, and would this affect the spectral sensitivity curves?

    • John Cavagnaro
      March 8
      Reply

      I’ve also read that the coastal optics 105 is modelled on the UV nikkor, would they have the same spectral transmittance?

      also AR refers to anti-reflective coating, which generally increases transmittance for a lens/filter.

      • jolyon
        March 8
        Reply

        Hi, in our paper we refer to the standard 105mm F/2.8 micro-nikkor lens. The UV-Nikkor has only just gone into production again hasn’t it? We don’t have one in our lab, instead using the Ocean Optics lenses. I’m not sure whether they have any AR coatings, though I suspect they don’t, as (according to my limited knowledge of these coatings) they tend to only work in a limited spectral range (i.e. not across both UV and visible). I would guess that the UV-Nikkor would result in very similar spectral sensitivities to the Ocean Optics lenses. The main thing affecting the spectral sensitivities with different lenses is the UV transmittance cut-off, but given these lenses both transmit well down below 300nm (beyond the sensor sensitivity) I think they will be very similar. If you have access to a spectrometer you could see whether the D7000 with UV-Nikkor (but modelled with the Ocean Optics lens) produces the correct cone catch quanta from a sample of colours (as we did in our paper).

  29. Andy
    June 22
    Reply

    Hi Jolyon! I adapted a Samsung NX1000 according to your manual and it works just fine. The only thing is that the pure PTFE that I tried to use as a standard is always overexposed in the redUV channel. The very far right bar in the histogram is always indicated, no matter how underexposed the rest of the photo is (I use the same Baader UV Venus filter). Is it a problem with the camera, or is the PTFE supposed to be that reflective? Is there any way to reduce its reflectance a bit but keep it spectrally flat? Unfortunately, I have no spectrophotometer at hand to check the reflectance. Thanks.

    • jolyon
      June 22
      Reply

      Hi Andy, glad you’ve got your camera working! The PTFE is nearly 100% white, while very few other things in nature are that white. As a result the camera will generally be trying to expose for the majority of the image rather than the brightest thing in it. So you could simply switch to manual exposure control (“M” mode) if going right down to -3 stops in “A” mode doesn’t work. Buying a grey (e.g. 10-50%) reflectance standard would be one solution if you want to stick with automatic exposure control (but they are generally very expensive). Another solution is to have more of the white PTFE in the background, as this will help the camera’s automatic exposure control.

  30. Pedro
    July 11
    Reply

    Hi Jolyon! Such an amazing approach to measuring and interpreting animal coloration; I have been exploring the methods suggested in both the paper and the toolbox manual. I have generated visual models successfully (with help from Luis Robledo) and the software seems to be working properly so far. Recently I have been trying to compare objective coloration without modelling any visual system, since I just want to obtain reflectance values from different species of damselflies and compare them. The problem I am having is that for the uvR channel, and in very rare cases the uvB channel, I get negative values. Is there a chance that this has to do with the measured material (coloured translucent wings) in relation to the standard? For example, these negative values are more frequent when measuring really dark objects. My photos don’t seem to be overexposed or underexposed, by the way. How should I interpret these data? Or should I transform them first before proceeding to analysis? Thanks in advance.

    • jolyon
      July 13
      Reply

      Hi Pedro,

      See the FAQs above on negative numbers. These happen when the dark point isn’t correct, either because the reflectance of your dark standard is actually higher than the value you entered, or because the automatic dark point estimation is going wrong. One simple solution is to only use one standard and no dark point correction (only do this in conditions where lens flare/loss of contrast isn’t an issue). Alternatively, you could find your minimum value and add this to all of your measurements (across all channels). e.g. if your lowest measurement is -100 in vR, then add 100 to all of your channels. This is obviously a last resort though as it will subtly alter the colour ratios. Another likely cause is that the dark standard is receiving slightly more light than the thing you’re measuring.

  31. Stuart Pointon
    July 18
    Reply

    Hi Jolyon. I am after a method to simulate the effect of light sources with different spectra on an image. For instance, if I have a reference image and I attribute an illuminant A (3200K) halogen as the light source at the time of image capture, I then want to simulate the effect of an LED light source on that image. Getting the spectral output of the light sources is easy. I was wondering if your toolkit could work in this way? Or do you know of an ImageJ or other analysis plugin or app? Kind regards

    • jolyon
      July 20
      Reply

      Hi Stuart,

      Yes you could do this with the toolbox. The caveat being that you’ll need to know your camera’s spectral sensitivities, and take the sample photos under a nice broad-band illuminant (like natural sunlight or a good arc lamp that doesn’t have a very spikey output). Then you can create models for each illuminant you want to model. The output of the toolbox has gone through the Von Kries equation, so the images make the assumption that the animal’s visual system has adapted to that illuminant.
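
      A minimal Python sketch of the von Kries step (invented receptor and illuminant spectra; the toolbox’s own implementation will differ in detail):

          import numpy as np

          wavelengths = np.arange(300, 701, dtype=float)   # nm, 1 nm steps

          def catch(reflectance, illuminant, sensitivity):
              # Quantum catch of one receptor class, summed over wavelength.
              return np.sum(reflectance * illuminant * sensitivity)

          def von_kries(reflectance, illuminant, sensitivity):
              # Catch normalised to that of a perfect white under the same
              # illuminant, i.e. the receptor is assumed adapted to that light.
              white = np.ones_like(wavelengths)
              return catch(reflectance, illuminant, sensitivity) / catch(white, illuminant, sensitivity)

          # Invented example: one Gaussian receptor, two toy illuminants.
          cone     = np.exp(-((wavelengths - 450) / 30.0) ** 2)
          sunlight = np.ones_like(wavelengths)                 # flat stand-in
          halogen  = np.linspace(0.2, 1.0, wavelengths.size)   # red-heavy stand-in
          grey     = np.full(wavelengths.size, 0.5)            # 50% flat reflector

          print(von_kries(grey, sunlight, cone))  # 0.5
          print(von_kries(grey, halogen, cone))   # also 0.5: the illuminant is discounted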

  32. Graeme Awcock
    July 26
    Reply

    In my humble opinion this toolbox is an excellent starting point for practical imaging science investigations using DSLR’s in a wide variety of applications; – thank you very much for making this openly available!
    However, for my own part, I am interested in investigating colour shift due to heat treatment of non-organic materials, and because I lack a very uniform white light source, I would like to add to processing step to the workflow to perform a sort of ‘flat-field correction’ based on a second image file that I would have captured of the ‘white’ reference card filling the field of view. I recognize that this needs to be integrated fully into the workflow if I want to be able to take advantage of your excellent batch processing routines.
    So, am I right in thinking that your plugins are scripted entirely in ImageJ script, which is located as .txt files in the the relevant folder? e.g. Is it correct that the full functionality of the “Generate Multispectral Image” plugin is controlled by the text within the “_Generate_Multispectral_Image.txt” file? So, if I wanted to add in my “flat-field correction” step before normalisation, I *could* do so by editing that text file in the proper way?
    If that is the case, then I feel that I could make a decent attempt at writing that, not least because your script files seems to me to be a very clear example of well disciplined coding in ImageJ script.
    Can you see any reason why I should not be able to achieve my desired outcome?

    • jolyon
      August 7
      Reply

      Thanks so much for your kind comments. Incorporating flat-field correction should be easy enough; I’ve sent you an email with more info.

      Cheers,
      Jolyon
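
      For anyone attempting the same thing, the core of a flat-field step is just a per-pixel division; a small numpy sketch with invented linear images (not the toolbox’s own code):

          import numpy as np

          rng = np.random.default_rng(0)
          falloff = np.linspace(1.0, 0.7, 100)[None, :]   # fake illumination falloff
          scene = 5000.0 * falloff * rng.uniform(0.9, 1.1, (100, 100))
          flat  = 9000.0 * falloff                        # white card filling the frame

          # Divide out the illumination pattern, then rescale by the flat field's
          # mean so overall intensity is preserved. This only makes sense on
          # linear images, before any normalisation to the grey standard.
          corrected = scene / flat * flat.mean()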

  33. Sammie
    September 8
    Reply

    Hi Jolyon,

    Unbelievably useful website and tool, thank you!

    Do you know how I could go about getting a sintered ptfe standard? Are they particularly expensive? Are there any alternatives when doing UV photography?

    Thanks.

    • jolyon
      September 13
      Reply

      Hi Sammie. The main suppliers of spectralon standards are labsphere (US & Canada) or Pro-lite in Europe. They tend to cost a slightly crazy amount (about 300-600 pounds each). But watch this space as I plan to start selling some standards ideal for UV/visible photography. Other alternatives are to find suitably white diffuse materials. A compressed pot of barium sulphate is one very cheap example. Wrapping plumber’s PTFE tape around a ~2mm thick piece of raw white PTFE is also a good alternative (just make sure you stretch the tape fully so there are no folds – get it as thin as you can). These will give you a ‘white’ that you don’t know the exact reflectance of (e.g. it might be anywhere from about 90% to 99%), but for many applications the exact value isn’t essential.

  34. Natalia Lifshitz
    September 12
    Reply

    Hi Jolyon,
    First of all, thanks for creating and sharing this. Having a standardized (and cheaper) way of measuring color is great and will save us a lot of time struggling with manual calibrations.
    So for my research, I’m interested in feather color of tree swallows and before I read your paper I took digital photos of the birds in the field. As you mention in your tutorial, it’s very difficult to control for lighting conditions in the field as you cannot choose to work only on sunny days. However, following some previous advice and after reading some related papers (e.g. Stevens et al. 2007), I took the photos in RAW format, with a black background, using a tripod and including the same grey standard in every picture. However, my grey standard is not professional, but a black-to-white scale for commercial paints and apparently the black is not that black and the white is not that white. So my question is: can I somehow calibrate this in ImageJ to make it work? I have access to a spectrometer and could get more accurate measures of the colors in my scale. Would that help?
    Thanks!

    • jolyon
      September 27
      Reply

      Hi Natalia, it should be relatively easy to calibrate your standards. Use your spectrometer to compare your paint samples to a standard of known reflectance (normally the 99% standard for spectrometry). Then calculate the average reflectance in the 400-700nm range for your paint. If there’s more than a ~5% difference in the flatness (e.g. the paint is actually slightly blue or yellow), then things are a little more complicated, but you can still use it. Even if the grey paint is not perfectly spectrally flat (grey), one option is simply to publish the reflectance of the paint, but in the analysis assume it is perfectly grey. This won’t affect any of your data as long as you use the same standard/paint throughout your entire study.
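
      A sketch of that calculation (invented spectrometer counts; the 0.99 is the known reflectance of the reference standard):

          import numpy as np

          wavelengths = np.arange(400, 701)                 # nm

          # Invented stand-in spectra: swap in your own readings of the 99%
          # standard and the paint, taken under identical conditions.
          ref_counts   = np.full(wavelengths.size, 1000.0)  # 99% standard
          paint_counts = np.linspace(310.0, 330.0, wavelengths.size)

          # Reflectance of the paint relative to the known standard.
          paint_reflectance = (paint_counts / ref_counts) * 0.99

          print(paint_reflectance.mean())                            # value to enter for the standard
          print(paint_reflectance.max() - paint_reflectance.min())   # >~0.05 means it isn't very 'grey'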

  35. bonewits
    September 30
    Reply

    Your image file on this page will not open

    • jolyon
      October 2
      Reply

      Which image? It all seems to be loading ok for me.

  36. Gavin
    November 7
    Reply

    Hi Jolyon and all,

    Has anyone figured out this error mentioned before:

    Waiting for R to process data…
    Operating system: Mac
    Command: Rscript /Users/gavinmleighton/rScript.r
    Cannot run program “Rscript”: error=2, No such file or directory

    I thought it might be an issue with the PATH but have added the directory, the rScript.R file, and the R application to the path and get the same error.

    Thank you,
    Gavin

    • jolyon
      November 8
      Reply

      UPDATE: This issue has (hopefully) been fixed in version 1.2

      Hi Gavin,

      I’m not sure why this isn’t working, ImageJ seems unable to send commands. Anyway, for now there is a simple workaround. While the script is ‘waiting for R to process data’, open terminal, then type in the command shown in the ImageJ log window. In your case this would be: Rscript /Users/gavinmleighton/rScript.r

      I’ll try to come up with a more elegant solution!
      Cheers,
      Jolyon

      • Torben
        November 13
        Reply

        Hi Jolyon,

        First, thank you for providing this great software.
        I am working on a Mac too and have the same issue as Gavin.
        Unfortunately the workaround through the terminal didn’t work either.
        I would be grateful for a solution.

        Thanks, Torben.

        • jolyon
          November 14
          Reply

          Thanks Torben. Hmm… odd. I’ll try to find a workaround that doesn’t involve using R at all.
          UPDATE: I have hopefully fixed this issue completely in version 1.2 by moving away from R to java libraries.

  37. Pedro
    December 20
    Reply

    Dear Jolyon,

    I want to use the JND function in the toolbox, and I want to ask you about the receptor-noise limited model. I am aware that the toolbox can provide JND values, but you have to specify the Weber fraction, photoreceptor types and the noise ratio for each one. The problem is that I don’t have a clear idea of how to obtain the ratios for each photoreceptor. For example, I have a cone proportion of 3:2:2:1 and I would like to know if you can give me any clue on how to get the noise values from these numbers. I am guessing that this can be achieved using equation (7) in Vorobyev and Osorio 1998, but I’m not sure. I hope this makes sense to you and I appreciate any suggestion or comment you might have.

    Cheers to everybody in the lab.

    Pedro.
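
    (For readers with the same question: in the receptor-noise model the per-channel noise is usually derived from one reference Weber fraction and the relative cone abundances, e_i = ν / √η_i with ν = ω √η_max, where η_i is the relative proportion of cone class i and ω is the Weber fraction of the most abundant class. For proportions 3:2:2:1 with ω = 0.05, this gives e ≈ 0.05, 0.061, 0.061 and 0.087 – a reading of equation (7) in Vorobyev & Osorio 1998 that is worth checking against the paper itself.)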

  38. Andy
    March 21
    Reply

    Dear Jolyon,
    thanks for a wonderful tool. I wonder if there is a way to get total luminance values, not separated into the vR, vG, vB, uR, uB channels and independent of any visual system – something equivalent to the area under the curve when measuring with a spectrophotometer. I need some approximation to quantify the amount of melanin in skin from photos. Would you have any clue? Thanks.
    Andy

    • jolyon
      April 6
      Reply

      Hi Andy,

      Unlike a spectrometer, there is substantial overlap between some of the camera’s channels. E.g. uR and uB have a lot of overlap. So while you can simply sum the three or five camera channels this would under-represent spectral differences in the UV range compared to red (vR) for example. One option would be to make a dummy cone-catch model that has a uniform spectral sensitivity. E.g. make a new receptor type that has a sensitivity of 1 at each nm from 300 to 700nm (see the existing files for the format required), then use the toolbox to make a model for this system. By doing this you’ll end up with a cone-catch model that will control for the spectral overlap in camera sensitivities and also control for the illuminant spectrum. Alternatively, for biological questions it usually makes more sense to model an animal’s achromatic sensitivity for luminance measures.

      Cheers,
      Jolyon
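
      A minimal sketch of generating such a flat ‘receptor’ (the comma-separated wavelength/sensitivity layout is an assumption here – copy the exact format of the sensitivity files that ship with the toolbox):

          # Write a dummy receptor with sensitivity 1.0 at every nm from 300-700.
          # NOTE: the column layout is assumed; mirror the toolbox's own files.
          with open("Uniform.csv", "w") as f:
              for wl in range(300, 701):
                  f.write(f"{wl},1.0\n")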

  39. Sofía
    March 31
    Reply

    Hi Jolyon
    I want to use this software, but I have a problem: I didn’t know about it before taking my photos, so I only have my images in JPG format. Is there any possibility of using the software to analyze my images?

    Thanks,
    Sofia

    • jolyon
      April 3
      Reply

      Hi Sofia,

      JPG images are non-linear, meaning that in order to convert them to objective images they need to be linearised. I have just added a feature to the toolbox that allows you to model the linearity of your JPG photos though, so all is not lost! You’ll need to find a set of grey reflectance standards (e.g. a colour chart with ideally 8 different grey levels of known reflectance from black to white), and take a photo of this under uniform lighting conditions with the longest exposure you can that doesn’t saturate (over-expose) the white standard. Then you need to create a linearity model: go “plugins>Multispectral Imaging>Tools>Model Linearisation Function” Then input the reflectance values and follow the instructions. This will create a linearity model for your camera. It can only be used for your camera though, not any other camera.
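
      A sketch of what such a linearisation model does, with invented numbers (the toolbox fits its own function; here a simple power law is fitted by least squares in log-log space):

          import numpy as np

          # Known reflectances of the grey patches (0-1) and invented mean JPG
          # pixel values measured from them.
          reflectance = np.array([0.03, 0.09, 0.19, 0.36, 0.47, 0.59, 0.77, 0.95])
          pixel       = np.array([44.0, 81.0, 117.0, 156.0, 175.0, 193.0, 217.0, 238.0])

          # Fit reflectance = a * pixel^b.
          b, log_a = np.polyfit(np.log(pixel), np.log(reflectance), 1)
          a = np.exp(log_a)

          def linearise(v):
              # Map a JPG pixel value (0-255) to estimated linear reflectance.
              return a * np.asarray(v, dtype=float) ** b

          print(linearise(pixel))  # should land close to the known reflectances
          print(linearise(128))    # a mid-grey JPG pixel is only ~25% reflectance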

  40. Julien R
    April 6
    Reply

    Hi Jolyon,
    Is it possible to extract (e.g., as .txt tables) the pixel values (for each layer of either sRGB or photoreceptor excitation matrices) of a ROI? The toolbox allows calculating the mean and SD of these values for each layer but I would like to compute other statistics, like the volume occupied by the ROI in a colour space. Thanks!

  41. Julien R
    April 6
    Reply

    Hi again Jolyon,
    Do the sRGB pixel values have any significance? For example, the mean pixel value for the R layer of a ROI is 13,203. What does this value mean, and is it bounded? I was expecting a value within the range [0;1] or [0;255]. In advance, thanks for your response.
    Julien

    • jolyon
      April 6
      Reply

      Hi Julien,

      The pixel values are on a 0-65535 range, where 65535=100% reflectance relative to the standard. This is the 16-bit range and is something of a legacy feature that causes some confusion. Feel free to divide the numbers by 655.35 to get a % reflectance scale.

      As for extracting pixel values, there are no tools built in with the toolbox, but imagej can easily extract text images. You’ll need to write a script to do this efficiently. The easiest way would probably be to select an ROI, crop to the ROI, make all pixels outside of the ROI some other value (e.g. -1, or “NaN”), then save the image as a text image. These functions can all be done with standard imagej menu functions that you could record and make a script from pretty easily.

      Let me know if you run into any trouble and I’m sure I can help you out.
      Cheers,
      Jolyon
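
      To make the scale and the text-image recipe concrete, a small numpy sketch (invented data, not a toolbox function):

          import numpy as np

          # Invented stand-in for one 32-bit mspec slice exported from ImageJ.
          slice_px = np.random.default_rng(0).uniform(0, 65535, (100, 100))
          roi = np.zeros((100, 100), dtype=bool)
          roi[20:60, 30:70] = True                 # hypothetical rectangular ROI

          percent = slice_px / 655.35              # 65535 == 100% reflectance

          # Mirror the recipe above: NaN outside the ROI, then save as text.
          out = np.where(roi, percent, np.nan)
          np.savetxt("roi_slice.txt", out, fmt="%.3f")
          print(np.nanmean(out))                   # mean % reflectance within the ROI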

  42. Great software! I’d definitely be interested if you start marketing the little square gray standards shown in your manual and other sources.
    Here’s a less expensive source for uniform PTFE white standards. (I have no stake in this company, I just needed to spend less than the very high prices for Spectralon etc. standards!) Thorlabs in the US sells small PTFE endcaps, part # SM05CP2C, for only $28.00 (22 pounds). These are meant to cap integrating spheres, but I measured their reflectance spectrum and they’re quite flat: 99.0% reflectance (stdev 0.05%; full range 0.09%) over the visible and near-UV.
    https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_ID=1658

    • jolyon
      April 19
      Reply

      That’s a great call! This will be perfect as a mini white standard – thanks for sharing.

  43. Troubleshooting question about Cone Mapping: I was able to get the Generate_Cone_Mapping_Model macro up and running in Fiji/ImageJ (just installed and updated a fresh version on 4/18/17 after installing Java JDK 8, so everything should be up to date!) and MICA 1.2 (latest version). When I try to run a trial cone mapping using all of your own files over 400-700 nm and the default parameters, it runs but gives me a Macro error message: “Index (1) out of 0–1 range in line 404: modelR2s[k] = coneNames[k] + ” + replace(logString[1 “‘R2: “,”” );
    The debug window shows logString has length zero, with a window of Java exception errors starting:
    java.lang.NullPointerException
    at org.scijava.minimaven.MavenProject$XMLHandler.endElement(MavenProject.java:1531)
    I’ve tried Googling creatively to find what’s up, but so far have only found suggestions to update, reload Java JDK 8, etc., which I’ve all tried. Any ideas what might be going on here? I’m not using any files that don’t come with MICA.
    Thanks!

  44. Later that day… the Cone Mapping troubleshooting question is resolved:
    The issue turned out to be using the Fiji installation instead of regular ImageJ. When I installed regular ImageJ, I was able to get my own files, as well as the ones that came with MICA, working just fine with the Cone Mapping feature!
    First time I’ve had Fiji not work the same as plain ImageJ. (FYI, Fiji is supposed to be just ImageJ bundled with lots of handy plugins.)
    Perhaps worthwhile letting others know about this odd bug anyway.

    • jolyon
      April 19
      Reply

      Hi Suzanne, Great that you’ve got it working with your files. And yes, the toolbox requires the NIH imagej, not FIJI. I plan to make it cross-compatible, but initially I went for ImageJ because it’s a much smaller download and is much more ‘clean’ than FIJI (which is great, but comes with many features that aren’t quite required).

  45. Javier Diaz
    April 19
    Reply

    Hi Jolyon,
    Thank you very much for your great software, and for your effort to keep it freely available!
    I have a question: can colour (for example in RGB or LHS colour spaces) be measured in the multispectral image, instead of measuring the reflectance values? I need to measure colour in RGB or LHS colour spaces in some ROIs in the photos, which I have normalised using a grey standard.
    My images are in JPG format (unfortunately, I found your article after taking my photos), but I have linearised them using a RAW photo of a greyscale with known reflectance values.
    Thank you in advance!

    Javier

    • jolyon
      April 28
      Reply

      Hi Javier,

      The new version of the toolbox has support for non-linear (e.g. JPG) images, which you might find useful! The fits tend to be near-perfect (R^2 > 0.999).

      You can calculate hue values yourself, but for most statistics it is more convenient to convert to opponent channels instead of circular hue coordinates, e.g. in LAB space using the two colour channels: “A” is the red-green channel, and “B” is the blue-yellow channel. I have just added a tool that allows you to convert from XYZ to CIELAB. To use this, create a cone-mapping model using the CIEXYZ sensitivities and a relevant camera/illuminant. Then you can use the “XYZ to CIELAB” tool in “plugins>cone mapping>XYZ to CIELAB 32-bit”.

      Cheers,
      Jolyon
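
      For reference, the XYZ-to-CIELAB conversion the tool performs is the standard one; a Python sketch (assuming a D65 white point – use whichever white point matches your model):

          import numpy as np

          def xyz_to_lab(X, Y, Z, Xn=0.9505, Yn=1.0, Zn=1.089):
              # Standard CIEXYZ -> CIELAB conversion.
              def f(t):
                  d = 6.0 / 29.0
                  return np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)
              fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
              return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

          print(xyz_to_lab(0.9505, 1.0, 1.089))  # the white point maps to (100, 0, 0)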

  46. Dan
    April 26
    Reply

    Hi Jolyon. I’m trying to get your plugin working in ImageJ 1.50i for Mac OS X 10.11.6 El Capitan. It seems the Mac version of ImageJ doesn’t have DCRAW included. I downloaded the Linux version from source forge, and moved the dcraw folder into ImageJ/plugins. When trying to open a raw image, I get a “Lookup thread terminated with code 126…cannot execute binary file” error. I see in your user guide that sometimes you have to compile your own binary file. So now I’m working through that process, but when trying to execute the llvm-gcc -o dcraw dcraw.c -lm -DNO_JPEG -DNO_LCMS -DNO_JASPER code in Terminal, I get a “invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun” error in Terminal, and no exe file is generated. Any clue where I’m going wrong? Thanks.

    • jolyon
      April 28
      Reply

      Hi Dan,

      I’m afraid I don’t have a mac, so can’t help you debug easily. The error makes it sound like you’re missing (or don’t have the admin rights to access) the compiling software (xcrun).

      Cheers,
      Jolyon

  47. Dan
    April 28
    Reply

    I got it figured out. The problem was I didn’t have permission to run the developer tools needed to recompile dcraw. Once I got that sorted, I continued following the user guide and was able to get dcraw working within ImageJ. Thanks!

    • jolyon
      April 28
      Reply

      Great – thanks for the update!

  48. Sash
    May 1
    Reply

    Hi Jolyon,
    Thanks for the great tutorial and all the time you have invested! I managed to install the software with your plug-in and all on Mac OS 10.12.4 (Sierra), which was a bit of a campaign, but never mind. I have photographed the shell of a freshwater snail using a Nikon D70 with full-spectrum conversion, the famed UV-Nikkor, a Baader UV/IR filter for VIS, and an Asahi ZRR0340 (340nm) UV-pass filter for UV only (I also have the Baader U but prefer the Asahi). In each picture I had a white Teflon disk normally used to calibrate my spectroradiometer and a black Teflon disk with about 3% reflectance. I used a quantum flash for illumination, which emits a decent quantity of UV. I followed the steps shown in your tutorial when attempting to create the multispectral image. All I get is a plain white image with a bit of black jitter here or there as it jumps through the different colours (wavelengths). It looks nothing like in your tutorial. Any advice on what I might be doing wrong? Happy to send you the RAWs to see if you can come up with something different. Regards, Sash

    • jolyon
      May 1
      Reply

      Hi Sash, sounds like a great setup! The Asahi UV filter looks like it has great transmission though I don’t have any experience with it. I’ve no idea why your images are coming out weird, do they look ok in the photo screening? If so just double-check what standard reflectance values you’re using, and when using both the light and dark standards don’t use the black-point estimation. Feel free to email me a pair of sample RAW images (and msepc file). Cheers, Jolyon

  49. Milagro
    May 21
    Reply

    Great tool! Thanks
    I’ve tried to calculate the JND between two regions of my images, but the result is “NaN” for every image. Why does this happen? I’ve tried measuring the colours with different visual system models – “none”, “bluetit”, “peafowl” – and none of them work. I first tried “none” because I didn’t take the photos with any of the camera models the included visual systems were created with, though I don’t really understand whether that matters, so I tried other visual systems too, but it didn’t work. I even tried Bluetit 0.05, which is the visual system model I really need, but with that model even the initial colour measurements didn’t work, so the table needed to calculate the JNDs was never created.
    I would be so grateful if you can help me with this.

    • jolyon
      May 23
      Reply

      Hi Milagro. Are you using a UV camera? If you’re not using a calibrated camera then you could select the nearest model available from the tools, but I can’t guarantee the results will be accurate between different cameras. I suggest you watch the video guide for the JND calculations. To use the JND tool you first need to measure the images using the visual system model you’re interested in. Cheers,
      Jolyon

  50. Samuel
    June 20
    Reply

    Hi Jolyon,
    Thank you for making this tool openly available! I am hoping to use it to analyze color and pattern of sea anemones in tide pools, but I can’t afford the Spectralon Standards. I have read a few comments here about PTFE as a white standard, but nothing in the image will have a reflectance that large. Also, it would be best if the standard was waterproof so that it could be set directly beside the anemone. Preferably, I would like to have a waterproof gray standard. Does this even exist?
    Thank you!

    • jolyon
      June 27
      Reply

      Using standards underwater is a little tricky. Spectralon standards do work underwater (i.e. the water doesn’t damage them); however, they are extremely hydrophobic and hold a thin layer of air against their surface. This means the standards aren’t diffuse underwater (not a huge issue if you’re using diffuse, controlled lighting). As an alternative, people have used sand-blasted marine-grade stainless steel, which has a fairly flat reflectance spectrum (though this would need checking).

  51. Heidi
    September 7
    Reply

    Being complete newbies, we’re a little confused about what is a digital camera. If it’s not a cellphone camera, or tablet camera, then we should just use the webcam on our laptop? Would you mind giving a link to what you consider to be a digital camera? Thanks so much! ~Heidi

    • jolyon
      October 17
      Reply

      Digital cameras are any cameras that use CCD or CMOS image sensor chips. So everything you listed is a digital camera, whereas old film cameras are not.

  52. Hi Jolyon,
    Easy multispectral filter mounting and quick-change tip: Xume adapter quick-release filter holders for easily changing filters without disturbing camera alignment. These devices are pairs of very lightweight metal rings that screw onto your camera and the filters, then mate together magnetically. They come in a variety of sizes. (Note, I use them but I am not affiliated with the company that makes or sells them.) They are a bit pricey (>= 15 USD per adapter) but I managed to buy a set with one camera ring and several filter rings to get the price down.
    Thanks for adding the JPEG feature. I’m looking forward to using it!
    Suzanne

    • jolyon
      October 17
      Reply

      Thanks for the tip Suzanne! That looks like a great solution to a problem many people have had.

    • Mitch
      January 17
      Reply

      Hi Suzanne,
      I’m trying to find a setup for UV and Vis photography using the Xume magnetic filter holder you suggested. I’m going to take lots of photos, so the speed will make my life much easier! Unfortunately, it looks like the Baader U-Venus filter that people have suggested doesn’t come in sizes compatible with the Xume filter holder. If you’re doing UV photography, I was wondering whether you’ve managed to find a way to combine the Xume filter holder with the Baader lens, or do you use some other lens for UV photos?
      Thanks in advance for your advice!
      -Mitch

      • Hi Mitch,
        For UV filters, check out the products by http://www.uvroptics.com . Their Andrea filter transmission spectra agree well with those I’ve measured in my own lab, and they come in a 52 mm filter thread size, which is easy to use with the handy magnetic Xume quick-connect adapters. The price is listed as $269.00 USD today. (I always check around to see if used versions of everything are available to keep the cost low.) Hope this helps!
        Suzanne

  53. Hi Jolyon,
    I need a diffuser that will also reflect near-UV. In the MICA manual you say to use metal umbrellas. Are the kind of “silver” diffusing umbrellas sold at pro stores like B&H Photo really metallic and UV reflective? Any specifics on what to watch out for? Thanks for all the careful advice. I’ve been noticing that every time I think about something, you’re already on top of it!
    Suzanne

    • jolyon
      November 9
      Reply

      Hi Suzanne,

      Those metallic umbrella reflectors seem to work quite well. They will probably be aluminium, though I’m not certain. Aluminium does absorb a bit of UV, but most of the reflection is specular (surface) reflectance, so the underlying colour of the metal is less important. In short, shiny metallic umbrellas should be fine for diffusing or focussing light. But be careful: if you do ‘focus’ rather than diffuse the light, some parts of the scene will be much brighter than others (making the grey standard estimates less reliable).

      Cheers,
      Jolyon

  54. Hi Jolyon,
    I’ve been using MICA successfully, but today I tried to use it on my new Surface laptop running Windows 10. I downloaded the latest MICA and the latest ImageJ (not Fiji, though that had the same problem), installed both, and got an error when I tried to run Generate Cone Mapping Model (having installed it as a macro using Plugins/Macros/Install). When I try to run the macro, I get an error message:
    Dialog error in Line 126
    Dialog .addChoice (“Camera”, );
    I haven’t seen this one before. My Java installation should be fresh (v1.8 comes with ImageJ), and everything else was updated right before this. I tried installing the macro then restarting ImageJ, but it just loses the macro installation. Any advice you might have would be greatly appreciated!
    Thanks,
    Suzanne

    • jolyon
      November 9
      Reply

      Hi Suzanne,

      This issue could be due to where you’ve installed ImageJ. I know Windows is getting more and more fussy about this. Try installing (unzipping) ImageJ to a folder that isn’t write protected by windows. e.g. in Documents or something. Otherwise I’m afraid I don’t have a Windows 10 machine to play with and test at the moment.

      Cheers,
      Jolyon

      • January 25
        Reply

        Indeed, reinstalling in a different folder fixed the problem. Took me awhile to respond. Thanks!

  55. December 7
    Reply

    Hi, Jolyon! Thanks for your tool! Great work! How can I find out the spectral sensitivity of my camera? Could you point me in the right direction? I know that this process is difficult to do, but I think it is essential for my questions. For my current work I have been using a Nikon D90 with an Optik Makario UV SP2 400 52D filter. I want to convert my photos to cone catch and model bird vision.
    Best regards.
    João

    • jolyon
      January 5
      Reply

      You’ll see a brief explanation in our paper, along with links to other ways of doing this. Essentially you need to use a monochromator or bandpass filters to test the sensitivity of the camera at different wavelengths (recording the radiance with a spectroradiometer).
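      For anyone wanting the gist of that calculation, here is a minimal sketch in Python (hypothetical variable names, not the toolbox’s own code), assuming linear RAW pixel values and a stimulus stepped across wavelengths:

          import numpy as np

          # Assumes, for each monochromator (or bandpass filter) step:
          #   radiance:    (n,) spectroradiometer readings of the stimulus
          #   pixel_means: (n, 3) mean *linear* R,G,B pixel values of the
          #                same stimulus region in each RAW photo
          def spectral_sensitivity(pixel_means, radiance):
              response = pixel_means / radiance[:, None]  # response per unit radiance
              return response / response.max(axis=0)      # normalise each channel to peak at 1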

  56. Robin Beran MD
    December 29
    Reply

    Jolyon,
    I’m from the pre-computer era, and I wondered if the UV (300–400 nm) EIA Sony XC-EU50 monochrome CCD camera would give photos at least similar to how waterfowl (ducks/geese) might visualize the environment. Sorry, probably a stupid question, but I haven’t been able to find anyone to help me with your software.
    Awesome work you have done!

    • jolyon
      January 5
      Reply

      I’m afraid I don’t know what output your CCD camera records. But with the right filters you’re likely to be able to get close to the general colours a bird can see. The difficulty is recombining these into a single image.

  57. Pablo
    March 23
    Reply

    Hey!!
    Thanks for this amazing work. I am writing here because I am absolutely frustrated after spending two days trying to run the multispectral imaging tool. I can use it up until I have to load the images, then it says “IO Error executing system command: ‘-v’”. I am using Mac OS X Yosemite 10.10.5 … do you have any suggestions to help get it running?

    PS: What I want to do is measure color intensity in adult amphibians.

    Thank you in advance!!!

    • jolyon
      March 26
      Reply

      Hi, you’ll need to compile your own DCRAW – there are instructions in the user guide. I hope this solves your problem!

  58. Another grayscale query. I’m game for making my own, but I’d appreciate some guidance on what materials to use to make a spectrally flat gray. Above you mention using sand-blasted metal. Any advice on best practices for achieving different reflectances that way? Also, do you know how spectrally flat Munsell Neutral gray paint is in the near-UV range? How about artist paints that use titanium dioxide spheres + carbon black? I promise to get back to everyone with our results once we build these!

  59. OK, here is a lower (but not low) priced option for spectrally flat grey standards for use in multispectral imaging. Avian Technologies make a 4-step and an 8-step grayscale consisting of 1.25 cm square PTFE + carbon standards (very much like a standard color checker chart grayscale). Their MNA-FSS04-c 4-step scale is $1100 and their MNA-FSS08-c 8-step scale is $1500, which is the best deal I’ve found for PTFE so far. (After looking around, I learned that titanium dioxide and other common bases for white paint are sunscreen ingredients, so not a great foundation for UV imaging!)

    • jolyon
      May 23
      Reply

      Hi Suzanne,

      If you’re happy for your standards to remain as a powder I think you can use barium sulphate (a common base of white paint – very cheap, but you need quite pure stuff), then mix this with carbon in different proportions. This will be spectrally flat and very diffuse. You can squash the surface flat too. Obviously this is a lab-only option! There are other spectrally flat powders too (e.g. even sodium bicarbonate from your kitchen cupboard). Stainless steel is sometimes used underwater (PTFE is hydrophobic and creates an air pocket around the surface, which interferes with its diffuseness), but there’s no way I know of to change its reflectance from some intermediate grey. Often it’s only necessary to use one standard. You could also make a “black” manually (it’s never possible to get pure black, but for most purposes a small opening into a black box will be nice and black).

  60. Here’s a source for relatively low-cost ($185) UV-pass (300–400 nm) filters that fit into standard 52 mm Xume quick-release adapters and other filter mounts:
    https://www.etsy.com/shop/UVIROPTICS
    Their Lala U and LUV U filters both have spec’ed transmission curves that are rated to pass 300-400 nm. It would be nice if someone could actually measure their transmission spectrum and verify this.
    They are also selling a super-cheap $19 PTFE white balance target. No spectra are included; again, it would be great if someone could calibrate it and let us know how it compares to a Spectralon white standard. (I’ve purchased a variety of filters from this site, but have no affiliation with them!)
    Question: does anyone know of a source for an IR-cut filter that passes both visible and UV? All of those I’ve found are IR-UV cut filters that block below 400 nm. I’d like to block the IR on a full-spectrum modified camera without blocking the UV.

  61. Alfonso Aceves
    July 19
    Reply

    Hi Jolyon,
    In your paper you mention that when testing for linearity, one of the eight cameras was a Nikon D90 with a Nikkor 105 mm. But the Nikon D90 is not included in the toolbox’s list of cameras. So, are the spectral sensitivities of the D90 known? I have available a D90 converted to full spectrum and the Coastal Optics 60 mm lens, but I haven’t found the information to know whether I will be able to convert to cone catch with this setup.
    Thanks a lot,
    Cheers.
    A

  62. joão
    July 20
    Reply

    Hello, Jolyon! I am having difficulty finding where to add the spectral sensitivity that was calculated for my camera (at 1 nm intervals). Could you show me the way, please? I already assembled the file according to what was described in the user guide, and I also tried to insert my spectral response the way the guide shows, but I haven’t found the place to put my results sheet (CSV). I’m using Windows 10. Thank you very much!

    • jolyon
      August 23
      Reply

      Camera spectral sensitivities go in “ImageJ/plugins/Cone Mapping/cameras”

  63. Natalia Lifshitz
    September 19
    Reply

    Hi Jolyon,
    I have visible-spectrum photographs of birds under different light conditions, and I need to extract the RGB values of a plumage patch after controlling for illumination (using the two grey standards that I included in each photo). However, I cannot seem to do this using micaToolbox. Do I need to generate an MSPEC image and then use Analyze>RGBmeasure? If so, the macro won’t allow me to measure RGB from the .mspec I previously created.
    Thanks a lot! Cheers!
    N.
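    In case it helps anyone with the same workflow: the two-standard correction is just a linear rescaling of each channel, so you can sanity-check values by hand. A minimal sketch in Python (hypothetical names, not the toolbox’s own code):

        # `channel` is one linear image channel; p_dark/p_light are the measured
        # pixel values of the two grey standards in that photo, and
        # r_dark/r_light their known reflectances (e.g. 0.07 and 0.93).
        def grey_standard_normalise(channel, p_dark, p_light, r_dark, r_light):
            # map measured pixel values onto the known reflectance scale
            return (channel - p_dark) * (r_light - r_dark) / (p_light - p_dark) + r_dark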

  64. Writing to provide a few updates about illumination sources:
    I measured the emission spectrum of the Exo Terra Sunray 70 watt metal halide lamp that you mention above. Unfortunately, ours is much spikier than the spectrum the manufacturer posted on its website, so this source is not suitable for multispectral imaging. The same was true of a similar reptile lamp, the Zoo Med Power Sun.

    However, this recommendation led to an important discovery! We are able to use the Exo Terra Sunray 70 watt lamp fixture (without its light bulb) with the Iwasaki eyeColorArc bulb you recommend! This means we do not have to do our own wiring, get a ballast, etc. This works because the US-standard E26 socket accepts EU-standard E27 bulbs. We just screw in the eyeColorArc light bulb and use it!

    If anyone wishes to use this idea, do be aware that you need to add a diffuser in front of this kind of bulb. We used translucent PTFE film in front of our bulb and that works well, though we also take a flatfield image using a diffuse white reflector to allow correction of any remaining nonuniformity in the illumination, as sketched below.
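    For readers unfamiliar with flatfielding, the correction itself is simple; a minimal Python sketch (assuming linear images, not MICA’s own code):

        import numpy as np

        # `img` is a linear photo of the subject; `flat` is a linear photo of a
        # uniform diffuse white reflector under the same illumination.
        def flatfield_correct(img, flat):
            # dividing by the flatfield removes spatial nonuniformity of the
            # illumination; rescaling by its mean preserves overall intensity
            return img * (flat.mean() / flat)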

    We also measured the emission spectrum of an Iwasaki eyeColorArc lamp purchased this summer, and found that the fully warmed-up bulb gave us a spectrum very similar to the one included with MICA, even though we had not removed any filtering from the glass envelope. In both our spectrum and the one included with MICA there is not much UV below 350 nm. This does not matter for our applications, but it is worth being aware of for applications where it does.

  65. Jon
    November 20
    Reply

    Dear Jolyon,

    Thanks so much for this program – it is an indispensable tool in my research on parrot color evolution!

    I am trying to create a cone mapping model: the Java file is generated, but the class file is not. I’ve tried changing the camera filename and it still doesn’t work.
    I think the Java writer is failing and the .java file isn’t correctly formatted. Instead of the R² values it just shows a bunch of lines that say float[] 0, which throws a whole load of errors further down, and the regression functions are just filled in with [i] rather than actual values. This happens across multiple operating systems on multiple computers.

    Please let me know if you have any possible solutions.
    Thanks!

    • jolyon
      November 22
      Reply

      Hi Jon,

      Sorry for the bug – try rolling back to ImageJ version 1.49 (go to Help > Update ImageJ and select v1.49). Some bugs were introduced with the latest version of ImageJ, which I’m soon to fix in a BIG update! For now this should solve the problem.

      Cheers,
      Jolyon

  66. anita
    December 3
    Reply

    Dear Jolyon, thank you for the amazing program!
    I am a PhD student and I have recently started on the quantitative analysis of color in amphibians. I would like to turn my RGB values into hue, chroma and brightness via LAB. All the converters I have found (e.g. http://www.brucelindbloom.com/index.html?Math.html) take for granted that the RGB values are within a range of 0 to 255. However, my RGB values are not in this range (e.g. my Rmax is 18341, Gmax is 20163 and Bmax is 10325). Should I replace the maximum value of 255 with the RGB maximum of my data, or something else? Do you have any suggestions? Thanks!

    • jolyon
      January 7
      Reply

      The values in the current toolbox are on a 16-bit range (i.e. 0 to 65535). Divide these numbers by 655.35 to get percentages if you want that, or divide by 257 to get an 8-bit (0–255) range.
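      For example, using the values quoted above (a minimal Python sketch):

          # Rescaling MICA's 16-bit linear values.
          rgb16 = [18341.0, 20163.0, 10325.0]

          percent   = [v / 655.35 for v in rgb16]   # 0-100 % scale
          eight_bit = [v / 257.0 for v in rgb16]    # 0-255 scale (65535 / 257 = 255)
          unit      = [v / 65535.0 for v in rgb16]  # 0-1 scale, what many converters expect

          # One caveat (an aside, not from the toolbox docs): most RGB-to-LAB
          # converters assume nonlinear sRGB input, whereas these are linear
          # camera values, so a direct conversion is only approximate.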

  67. Andy
    March 21
    Reply

    Dear Jolyon, I tried to calculate colour JND differences from a VIS image with the Peafowl 400–700 model and a Weber fraction of 0.05, but I always got NaN results. Then I tried to generate a new cone mapping model using data from Peafowl 300–700, chopping off everything below 400 nm. Now it works for some colours, but I still get NaN when blue is involved. Do you have any suggestions for how to solve this? Thanks.
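    For anyone debugging the same NaN issue, here is a minimal Python sketch of the receptor-noise-limited JND model (Vorobyev & Osorio 1998) that such comparisons are based on; the cone ratios below are illustrative, not the peafowl values. Note the log ratio: it fails whenever a cone-catch value is zero or negative, which is the usual cause of NaN JNDs.

        import numpy as np

        # Receptor-noise-limited JND for a trichromat (Vorobyev & Osorio 1998).
        # Q_a, Q_b: cone-catch triplets of the two colours being compared.
        # weber: Weber fraction of the most abundant cone type.
        # ratios: relative cone abundances (illustrative values only).
        def jnd_trichromat(Q_a, Q_b, weber=0.05, ratios=(1.0, 2.0, 2.0)):
            ratios = np.asarray(ratios)
            e = weber / np.sqrt(ratios / ratios.max())   # noise per channel
            # log ratios of cone catches: zero/negative catches yield -inf/NaN here
            df = np.log(np.asarray(Q_a) / np.asarray(Q_b))
            num = (e[0]**2 * (df[1] - df[2])**2 +
                   e[1]**2 * (df[0] - df[2])**2 +
                   e[2]**2 * (df[0] - df[1])**2)
            den = (e[0]*e[1])**2 + (e[0]*e[2])**2 + (e[1]*e[2])**2
            return np.sqrt(num / den)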

  68. I’m hoping to shoot some multispectral images using a polarizing filter, but I’ve learned that it’s really hard to find broadband polarizers that work in the UV. I have found a wire-grid polarizer at Thorlabs that is 25 mm in diameter (very small for photography), but it does have a pretty flat transmission in the wavelength range required. However, it costs >$1K. PolarPro now has a line of quartz-based camera filters, and they sell both polarizing and ND filters that promise to be flat in their responses and cost about $100 each, but no spectra are posted. They also have limited filter ring sizes. Does anyone have experience with using polarization in UV or multispectral photography? Does anyone know of other sources, or have data for or experience with the PolarPro?

  69. Suzanne Amador Kane
    April 11
    Reply

    More on UV-compatible polarizers thanks to the advice from this article:

    Foster, J. J., Temple, S. E., How, M. J., Daly, I. M., Sharkey, C. R., Wilby, D., & Roberts, N. W. (2018). Polarisation vision: overcoming challenges of working with a property of light we barely see. The Science of Nature, 105(3-4), 27.

    I found a good US source for linear UV-VIS polarizers: Bolder Vision Optik sell them (BVO-UV) and will cut them to size for about $250:
    http://boldervision.com/
    I purchased empty 52 mm filter mounts ($25) from Edmund Scientific, and now I have a UV-compatible linear polarizer. Knight Optical in the UK also sell them for a similar cost.

    One caution is that the “polarizing filters” sold for photography are usually circular polarizers (that is, they combine a linear polarizer with a quarter-wave plate). This way, the light that exits the filter is not linearly polarized, so it does not interact in complicated ways with any mirrors, prisms, etc. in the camera. That will not be the case with this linear polarizer used by itself, so watch out for this.
    It should not be a problem if you are using it for UV photography, but it is worth considering the complications if you are using it for quantitative multispectral imaging and your camera includes mirrors, prisms, etc.

  70. Here’s a tip about how to polish the UV coating off the Iwasaki eyeColorArc bulbs easily, quickly, and safely. (Jolyon suggests using a wire brush on a drill, which should work fine, but I don’t feel comfortable doing that!) Instead, I use 3M polishing papers, available from many vendors, including Amazon. You can use these dry – no need for liquid or a polishing compound. I began by using the coarsest grade (green) to remove the UV coating, simply polishing it off the glass bulb by hand. (Before you start, note how the bulb is somewhat iridescent – the thin film of the coating causes colored reflections.) The bulb will look frosted after this first step. Next, I used the finest grade of polishing paper (white) and polished the bulb until it was as optically clear as it was originally – but no longer iridescent. The whole process is very fast (a few minutes tops). The bulbs work fine after this treatment, and they emit far more UV.

  71. Andre
    September 16
    Reply

    Hi Jolyon
    I’m encountering an issue with the photo screening tool in the old MICA toolbox: the histogram in the photo screening tool suggests my images are correctly exposed or under-exposed, but when I check them in RawTherapee (under a neutral profile) they are definitely over-exposed.

    Do you have any idea what may be causing this issue? I’ve tried re-installing ImageJ, changing DCRAW versions, etc., to no avail. I am using a 64-bit Windows machine.

    I have also tried using the new MICA toolbox, but the photo screening tool there does not work either. In this case, the settings box for visual (or visual & UV) does not open any options, and when I press OK to continue I get an error to do with strings and settings.

    Any help would be greatly appreciated

    Thanks in advance

    • jolyon
      January 10
      Reply

      Please try out the new toolbox here: http://www.empiricalimaging.com

      Let me know if you have any issues with that. You can trust the toolbox’s exposure metering more than any other software’s, so if the micaToolbox says an image is fine then it’ll be fine to use.

  72. hang zhang
    December 8
    Reply

    Hi Jolyon,
    I’m getting confused by “energy” while reading the user guide.
    The guide says the “energy” at each scale is measured as the standard deviation of the filtered pixel values, but Stoddard et al. (2010) define pattern “energy” (e) as the sum of the squared pixel values in each image divided by the number of pixels in the image, and the difference puzzles me.

    • jolyon
      December 10
      Reply

      Yeah, you can use variance instead of standard deviation (though previous methods haven’t controlled for area!). With zero-mean filtered values the two are directly related – see the sketch below.
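      To make the relationship concrete, a minimal Python sketch (assuming, as is typical for band-pass filtered images, that the filtered values have roughly zero mean):

          import numpy as np

          # `filtered`: a band-pass filtered image at one spatial scale
          def energy_sd(filtered):
              return np.std(filtered)                     # user-guide definition

          def energy_sumsq(filtered):
              return np.sum(filtered**2) / filtered.size  # Stoddard et al. (2010)

          # With zero mean, sum(x^2)/N equals the variance, so
          # energy_sumsq == energy_sd**2: the two measures rank patterns identically.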
