Virtual (at Valencia)
Spanish & Portuguese
Advanced Optical Microscopy Meeting 2020
November 24th to 27th, 2020
In this section of the workshop, we will discuss two distinct and complementary approaches to observing biological samples, high-content microscopy and light-sheet microscopy, and highlight the pros and cons of each. We will touch upon the biologically relevant information that can be obtained with each technique, thus providing a set of guidelines for choosing the experimental approach best suited to the sample of interest.
In this section we will use CellProfiler to batch analyze a 3D imaging dataset. We will introduce the recently released CellProfiler 4 and present the major changes in this version. Then, we will design a custom analysis pipeline to segment objects, extract quantitative features and export a results table. We will also show how the analysis pipeline can be customized to address advanced analytical requirements and allow annotation with experimental metadata.
In this part, we will see how to process the data obtained from the image analysis. We will work with Excel to get a quick visualization and start interpreting our results, and with Orange to apply machine-learning techniques to our data.
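Orange is operated through its graphical canvas, so no code is needed in the session itself. Purely as an illustration of the kind of analysis it performs behind the scenes, here is a hand-rolled k-means clustering sketch in plain Python (this is not Orange's API, and the per-cell feature values are invented for the example): measurements exported from image analysis are grouped into clusters that can then be inspected.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Naive k-means: group feature vectors (e.g. per-cell
    measurements exported from image analysis) into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Invented per-cell features (area, mean intensity): two visible groups.
cells = [(102, 0.21), (98, 0.25), (110, 0.19),
         (250, 0.80), (240, 0.85), (260, 0.78)]
centers, clusters = kmeans(cells, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

In Orange the same result is obtained by wiring a File widget into a k-Means widget on the canvas, with no code written by the user.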
Optical Projection Tomography (OPT) is a technique still largely unexplored in biomedical research. OPT has been instrumental in establishing high-quality virtual atlases for several vertebrate embryos and in expediting efforts such as the International Mouse Phenotyping Consortium to characterize thousands of mouse phenotypes. Here we will show a simplified version of the OPenT, simple and inexpensive to build and operate, and capable of producing isometric macro- and mesoscopic-level 3D datasets of high quality, anatomical detail and visual impact. The OPenT can be built with basic knowledge of electronics, optics and ImageJ operation. OPT also allows 3D imaging of non-fluorescent samples and does not require stitching and fusion of different views, thus saving enormous time and computational/data-handling resources. In this demonstration we show the prototype and how to build it, discuss advantages and challenges along with considerations about sample preparation, and introduce a new ImageJ script and workflow for pre-processing OPT datasets.
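OPT reconstruction is essentially computed tomography at optical wavelengths: projections acquired while the sample rotates are back-projected to recover the volume. The workshop's ImageJ workflow handles the real pre-processing; purely to illustrate the principle, here is a toy, unfiltered back-projection in plain Python for hypothetical data consisting of a single point source on the rotation axis (real OPT pipelines use filtered back-projection on full image stacks):

```python
import math

N = 21                      # toy reconstruction grid (N x N)
CENTER = N // 2
ANGLES = [math.radians(a) for a in range(0, 180, 10)]

# Simulated sinogram for a point source on the rotation axis:
# at every projection angle the point maps to the central detector bin.
sinogram = []
for theta in ANGLES:
    proj = [0.0] * N
    proj[CENTER] = 1.0
    sinogram.append(proj)

# Unfiltered back-projection: smear each projection back across
# the grid along its acquisition angle and sum the contributions.
recon = [[0.0] * N for _ in range(N)]
for theta, proj in zip(ANGLES, sinogram):
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    for i in range(N):
        for j in range(N):
            # Detector coordinate of pixel (i, j) at this angle.
            s = (j - CENTER) * cos_t + (i - CENTER) * sin_t
            bin_ = int(round(s)) + CENTER
            if 0 <= bin_ < N:
                recon[i][j] += proj[bin_]

# The brightest voxel should sit at the original point position.
peak = max((recon[i][j], i, j) for i in range(N) for j in range(N))
print(peak[1], peak[2])  # → 10 10
```

Because every projection reinforces the true position while smearing elsewhere, the point is recovered at the grid center; filtering the projections before back-projection sharpens away the residual blur.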
Combining and multiplexing microscopy approaches is crucial to understand cellular events, but requires elaborate workflows. We developed a robust, open-source approach for treating, labelling and imaging live or fixed cells in automated sequences. NanoJ-Fluidics is based on low-cost Lego hardware controlled by ImageJ-based software, making high-content, multimodal imaging easy to implement on any microscope with high reproducibility. We will demonstrate how easy it is to build a pump set and how you can use it in your experiments.
The ABBE Platform of the Champalimaud Foundation developed an innovative imaging chamber for drug testing in long-term in vivo imaging using the Zeiss lightsheet system. The chamber is optically transparent, biocompatible and of small inner volume (~3 mL), which improves the efficiency of drug-testing studies. During this workshop we will show the technical characteristics of the chamber, how to make it, and how to use it from assembly to image acquisition.
Whole-organism imaging is pursued by life scientists to study cells in their natural “in tissue” context, but the challenges associated with in toto imaging of centimeter-sized organs, combined with the resulting instrumentation costs, still impair global access to such analysis. Here, we propose an affordable lightsheet device and imaging framework, LEMOLISH (LEGO-based Motorized Lightsheet microscope), aimed at enabling anyone to equip the lab with an easy-to-mount benchtop solution and enter the game of 3D imaging of very-large-scale cleared samples, at a starting cost below that of immunostaining and 100- to 200-fold below that of commercial or highly customized systems. We will present the features of the instrument in a live hands-on demo from the lab to offer participants a close-up view of its building blocks and acquisition strategy.
(from November 2020: https://legolish.org/lemolish/)
Two designs of lightfield microscope are shown. The conventional lightfield microscope captures at each shot an array of thousands of microimages, from which the perspective views are computed. By contrast, the Fourier lightfield microscope captures directly, at each shot, the collection of orthographic perspective views. The two modes are compared in terms of resolution, depth of field and computation time. In addition, practical hints on how to set up the two microscopes are given.
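The resolution versus depth-of-field comparison follows the standard diffraction relations. As a reminder (textbook formulas with illustrative numbers, not figures from the talk), the Abbe lateral resolution limit is λ/(2·NA) and the wave-optical depth of field scales as λ·n/NA², so lowering the numerical aperture degrades resolution linearly but extends the depth of field quadratically:

```python
def abbe_resolution(wavelength, na):
    """Abbe diffraction limit for lateral resolution: lambda / (2 NA)."""
    return wavelength / (2 * na)

def depth_of_field(wavelength, na, n=1.0):
    """Wave-optical depth of field: lambda * n / NA^2."""
    return wavelength * n / na ** 2

wl = 520e-9  # green emission, metres (illustrative value)
for na in (0.8, 0.25):
    print(f"NA={na}: resolution {abbe_resolution(wl, na) * 1e9:.0f} nm, "
          f"depth of field {depth_of_field(wl, na) * 1e9:.0f} nm")
```

Going from NA 0.8 to NA 0.25 coarsens the resolution limit by about 3x while extending the depth of field by about 10x, which is the trade-off each lightfield design negotiates differently.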
Streaming live from the 3DID Lab at the University of Valencia, we show how to set up the two lightfield microscope modes. The microscopes are implemented in open architecture on an optical table. The importance of the fine adjustment of some parameters, such as the illumination or the microlens conjugation, is shown. Then, some imaging experiments are performed in real time, and the resulting 3D images are shown and compared.
Deep learning, the latest extension of machine learning, has pushed the accuracy of algorithms to unprecedented levels, especially for perceptual problems such as those tackled by computer vision and image analysis. This workshop will cover the foundations of the field, the communities organized around it, and some important tools and resources for getting started with these techniques. Two successful deep-learning bioimage analysis platforms will then be presented, with hands-on examples covering how to set them up for daily use. No prior programming knowledge is required to follow the workshop.
Deep Learning (DL) methods are powerful analysis tools for microscopy. However, the need to access computational resources and the complexity of setting these up often create an accessibility barrier for most biology-focused laboratories. Here, we present ZeroCostDL4Mic, a DL platform which considerably simplifies the access to and use of DL for microscopy. For this, we exploit the computational resources provided by Google Colab: a free, cloud-based service accessible through a web browser. ZeroCostDL4Mic allows researchers without coding expertise to use some of the most powerful DL networks available today for, e.g., segmentation, denoising, artificial labelling, super-resolution microscopy, object detection and image-to-image translation. Importantly, the platform allows the user to perform every step of a DL workflow: training and use of the models, quality control of the network output, as well as integration within larger analysis pipelines.
The use of deep-learning models requires prior programming knowledge and expertise, which makes them unapproachable for the general public. We present DeepImageJ, an open-source project that enables the generic use of pre-trained deep-learning models provided by their developers in Fiji/ImageJ. The plugin acts as a software layer between TensorFlow and Fiji/ImageJ, with all the technicalities hidden behind a user-friendly interface. In this workshop we show the two main functionalities of DeepImageJ: (1) a model importer tool that gathers all critical information from developers to ensure correct image processing, and (2) a user-oriented tool that runs a selected model on an image batch. The plugin can also be called from a standard Fiji/ImageJ macro, which permits its inclusion in image analysis workflows. This design facilitates the use of DNN models by end-users.