Wednesday, December 13, 2017

Ce soir Paris Machine Learning Meetup #4 Season 5: K2, Datathon ICU, Scikit-Learn, Multimedia fusion, Private Machine Learning, Drug Design



So today is Paris Machine Learning Meetup #4, Season 5. Wow ! Thanks to Invivoo for sponsoring this meetup (food and drinks afterwards) and especially thanks for giving us this awesome place!


The video streaming is here:


Capacity: +/- 170 seats / first come, first served / then doors close

Schedule :

6:45PM doors open / 7:00-9:00PM talks / 9:00-10:00PM drinks/food / 10:00PM end



Gael Varoquaux (INRIA), Some new and cool things in Scikit-Learn

An update on the scikit-learn project: new and ongoing features, code improvements, and ecosystem.

Nhi Tran (Invivoo), Multimedia fusion for information retrieval and classification

“Multimodal information fusion is a core part of various real-world multimedia applications. Image and text are two of the major modalities that are being fused and have been receiving special attention from the multimedia community. This talk focuses on the joint modelling of image and text by learning a common representation space for these two modalities. Such a joint space can be used to address the image/text retrieval and classification applications.”
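
To give a concrete flavor of such a common representation space, here is a minimal sketch, not taken from the talk: two linear projection heads map hypothetical precomputed image and text features into a shared space and are trained with a hinge ranking loss; the feature dimensions, margin, and layer sizes are illustrative placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Two projection heads mapping (hypothetical) precomputed image and text
# features into one shared embedding space; all dimensions are placeholders.
img_proj = nn.Linear(2048, 256)   # e.g. CNN image features -> shared space
txt_proj = nn.Linear(300, 256)    # e.g. averaged word vectors -> shared space
optimizer = torch.optim.Adam(
    list(img_proj.parameters()) + list(txt_proj.parameters()), lr=1e-3)

def ranking_loss(img_emb, txt_emb, margin=0.2):
    # Hinge ranking loss: matching image/text pairs (the diagonal) should
    # score higher than non-matching pairs by at least the margin.
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    scores = img_emb @ txt_emb.t()                    # cosine similarities
    pos = scores.diag().unsqueeze(1)                  # matching-pair scores
    cost = (margin + scores - pos).clamp(min=0)
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost.masked_fill(mask, 0.0).mean()

# One toy optimization step on random stand-ins for real features.
img_feat, txt_feat = torch.randn(32, 2048), torch.randn(32, 300)
loss = ranking_loss(img_proj(img_feat), txt_proj(txt_feat))
optimizer.zero_grad(); loss.backward(); optimizer.step()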

Morten Dahl (snips), Private Machine Learning

By mixing machine learning with cryptographic tools such as homomorphic encryption, we may hope, for instance, to train models on sensitive data previously out of reach. Although these techniques are still maturing, in this talk we will look at some of them and how they were applied to a few concrete use cases.
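
To give a flavor of the cryptographic building blocks involved, here is a minimal sketch of additive secret sharing over a finite ring, one primitive commonly combined with machine learning for private aggregation; the modulus and helper names are illustrative and not taken from the talk.

import secrets

Q = 2**31 - 1  # illustrative modulus for the sharing ring

def share(x, n_parties=3):
    # Split an integer secret into n additive shares modulo Q.
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    # Recover the secret by summing the shares modulo Q.
    return sum(shares) % Q

# Each party adds its shares of two secrets locally; reconstructing the
# summed shares yields the sum of the plaintexts without revealing either one.
a_shares, b_shares = share(42), share(100)
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142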

Quentin Perron (Iktos), Artificial intelligence for new drug design

New drug design is a long (5 years), costly (50-100M$) and unproductive process (1% success rate from hit to pre-clinical candidate)… Iktos aims to leverage big data and AI to bring radical improvement to this process. Iktos has invented and is developing a truly innovative and disruptive artificial intelligence technology for ligand-based de novo drug design, focusing on multi-parametric optimization (MPO). Our proprietary technology is built upon the latest developments in deep learning algorithms. In a few hours, our technology can design new, druggable, and synthesizable molecules that are optimized to match all your selection criteria.


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, December 12, 2017

The Case for Learned Index Structures

Here is a different kind of The Great Convergence: neural networks going after data structures (hashes, B-Trees, etc.) and eventually database systems...




Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. The key idea is that a model can learn the sort order or structure of lookup keys and use this signal to effectively predict the position or existence of records. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show, that by using neural nets we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order-of-magnitude in memory over several real-world data sets. More importantly though, we believe that the idea of replacing core components of a data management system through learned models has far reaching implications for future systems designs and that this work just provides a glimpse of what might be possible.
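
As a toy illustration of the paper's premise, and not the authors' implementation, the sketch below fits a linear model to the positions of sorted keys, predicts where a record should sit, and corrects the prediction with a search bounded by the model's worst-case error; the key distribution and error handling are deliberately simplistic.

import bisect
import numpy as np

class ToyLearnedIndex:
    # Linear model mapping a key to its position in a sorted array, with a
    # bounded local search to correct the model's prediction error.
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys))
        positions = np.arange(len(self.keys))
        # Fit position ~ a * key + b, a tiny linear "model" of the key CDF.
        self.a, self.b = np.polyfit(self.keys, positions, deg=1)
        preds = self.a * self.keys + self.b
        self.max_err = int(np.ceil(np.max(np.abs(preds - positions))))

    def lookup(self, key):
        guess = int(round(self.a * key + self.b))
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        # Search only within the error-bounded window around the prediction.
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        return i if i < len(self.keys) and self.keys[i] == key else None

keys = np.cumsum(np.random.randint(1, 10, size=10_000))  # sorted, unique keys
index = ToyLearnedIndex(keys)
assert index.lookup(keys[1234]) == 1234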






Monday, December 11, 2017

Compressive 3D ultrasound imaging using a single sensor

Pieter just sent me the following:

Dear Igor,
I have been following your blog for a couple of years now as it served as an excellent introduction to the field of CS and an active source of inspiration for new ideas. Many thanks for that! It was quite a journey, but finally we managed to get some form of CS working in the field of ultrasound imaging. In our paper (online today: http://advances.sciencemag.org/content/3/12/e1701423, and a short video about this work: https://www.youtube.com/watch?v=whbbaF1nT4A ) we show that 3D ultrasound imaging can be done using only one sensor and a simple coding mask. Unfortunately we do not show any phase transition map and there is not much exploitation of sparsity, but it does show that hardware prototyping and the utilisation of signal structure in conjunction with linear algebra can reveal powerful, new ways of imaging.
It would mean a lot to me (a long-held dream) if you could mention our paper on your blog some time.


Kind regards,
Pieter Kruizinga
Awesome Pieter !



Compressive 3D ultrasound imaging using a single sensor by Pieter Kruizinga, Pim van der Meulen, Andrejs Fedjajevs, Frits Mastik, Geert Springeling, Nico de Jong, Johannes G. Bosch and Geert Leus

Three-dimensional ultrasound is a powerful imaging technique, but it requires thousands of sensors and complex hardware. Very recently, the discovery of compressive sensing has shown that the signal structure can be exploited to reduce the burden posed by traditional sensing requirements. In this spirit, we have designed a simple ultrasound imaging device that can perform three-dimensional imaging using just a single ultrasound sensor. Our device makes a compressed measurement of the spatial ultrasound field using a plastic aperture mask placed in front of the ultrasound sensor. The aperture mask ensures that every pixel in the image is uniquely identifiable in the compressed measurement. We demonstrate that this device can successfully image two structured objects placed in water. The need for just one sensor instead of thousands paves the way for cheaper, faster, simpler, and smaller sensing devices and possible new clinical applications. 
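
For readers new to this style of imaging, here is a minimal sketch of the generic single-sensor, coded-measurement idea rather than the authors' acoustic model: if the mask gives every pixel a distinct temporal signature, the scene can be recovered from a single sensor trace by regularized least squares; the dimensions and the random signature matrix are placeholders.

import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_samples = 64, 256          # toy scene size and trace length
# Each column is the (placeholder) temporal signature of one pixel as seen
# through the coding mask; in practice this comes from a calibrated model.
A = rng.standard_normal((n_samples, n_pixels))

x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, size=5, replace=False)] = 1.0   # sparse toy scene

y = A @ x_true + 0.01 * rng.standard_normal(n_samples)      # single-sensor trace

# Tikhonov-regularized least-squares reconstruction; sparsity-promoting
# solvers could be substituted here to exploit more signal structure.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))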





Monday, December 04, 2017

Nuit Blanche in Review (October and November 2017)

It's been two months since the last Nuit Blanche in Review (September 2017). We've had two Paris Machine Learning meetups and a two-day France is AI meeting, and Nuit Blanche featured two theses and a few job postings, all while NIPS 2017 is about to start. I also recall last year's NIPS in Barcelona, where there was a sense that the community would move in on areas other than computer vision. From some general takeaways from #NIPS2016:
  • With the astounding success of Deep Learning algorithms, other communities of science have essentially yielded to these tools in a matter of two or three years. I felt that the main question at the meeting was: which field would be next ? Since the Machine Learning/Deep Learning community was able to elevate itself thanks to high-quality datasets, from MNIST all the way to ImageNet, it is only fair to see where this is going with the release of a few datasets during the conference, including Universe from OpenAI. Control systems and simulators (forward problems in science) seem the next target.
Well, if you look at the few papers from these past two months mentioned here on Nuit Blanche, it looks like GANs and other methods have essentially made their way into the building of recovery solvers, i.e. algorithms dedicated to building images/data back from measurements. The recent interest in the development of Deep Learning for physics makes it likely we will soon build better sensing hardware.

Another interesting item to us at LightOn this past month is the realization that Biologically Inspired Random Projections is a thing. 

Enjoy the postings.


Implementation

In-depth
Hardware
Thesis
Meetup
Videos and slides:
CfP
Job:


credit: NASA / JPL / Ricardo Nunes

Friday, December 01, 2017

Job: Faculty position, ECE, Ohio State

Phil just sent me the following:

Hi Igor 
....
I wanted to let you know that our department (ECE at Ohio State) has an open faculty position covering the areas of image processing, MRI/fMRI, brain imaging, and neuroscience. The official job posting can be found at https://ece.osu.edu/about/employment
I'd be grateful if you considered posting this on your excellent Nuit Blanche blog.
Thanks,
Phil
--
Phil Schniter
Professor, The Ohio State University
 http://www.ece.osu.edu/~schniter
Sure Phil !



Thursday, November 30, 2017

Deep Generative Adversarial Networks for Compressed Sensing Automates MRI - implementation - / Recurrent Generative Adversarial Networks for Proximal Learning and Automated Compressive Image Recovery

Morteza just sent me the following awesome use of GANs that is eerily close to the dichotomy between the analysis and the synthesis approach in compressive sensing (I look forward to the use of GANs in learning field equations). Here is how he describes his recent work:


Hi Igor,

I would like to share with you and the Nuit Blanche reader our recent series of work on using generative models for compressed sensing.

We initially started using deep GANs for retrieving diagnostic quality MR images. Our observations in https://arxiv.org/abs/1706.00051 are quite promising!! The discriminator network can play the role of a radiologist to score the perceptual quality of retrieved MR images.

In order to reduce the train and test overhead for real-time applications, we then designed a recurrent generative network that unrolls the proximal gradient iterations. We use ResNets and the results are really interesting!! A simple single residual block repeated a few times can accurately learn the proximal and outperform the conventional CS-Wavelet by around 4dB. The results are reported in https://arxiv.org/pdf/1711.10046.pdf.

It would be great if you could share this news with your readers!!

Thanks,
Morteza
Thanks Morteza !
 
 
Deep Generative Adversarial Networks for Compressed Sensing Automates MRI by Morteza Mardani, Enhao Gong, Joseph Y. Cheng, Shreyas Vasanawala, Greg Zaharchuk, Marcus Alley, Neil Thakur, Song Han, William Dally, John M. Pauly, Lei Xing
Magnetic resonance image (MRI) reconstruction is a severely ill-posed linear inverse task demanding time and resource intensive computations that can substantially trade off accuracy for speed in real-time imaging. In addition, state-of-the-art compressed sensing (CS) analytics are not cognizant of the image diagnostic quality. To cope with these challenges we put forth a novel CS framework that permeates benefits from generative adversarial networks (GAN) to train a (low-dimensional) manifold of diagnostic-quality MR images from historical patients. Leveraging a mixture of least-squares (LS) GANs and pixel-wise ℓ1 cost, a deep residual network with skip connections is trained as the generator that learns to remove the aliasing artifacts by projecting onto the manifold. LSGAN learns the texture details, while the ℓ1 cost controls the high-frequency noise. A multilayer convolutional neural network is then jointly trained based on diagnostic quality images to discriminate the projection quality. The test phase performs feed-forward propagation over the generator network that demands a very low computational overhead. Extensive evaluations are performed on a large contrast-enhanced MR dataset of pediatric patients. In particular, images rated based on expert radiologists corroborate that GANCS retrieves high contrast images with detailed texture relative to conventional CS, and pixel-wise schemes. In addition, it offers reconstruction under a few milliseconds, two orders of magnitude faster than state-of-the-art CS-MRI schemes.
 An implementation is here.
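
As a rough sketch of the kind of mixed objective the abstract describes, an LSGAN adversarial term plus a pixel-wise ℓ1 term, and not the authors' code, the weighting and tensor shapes below are illustrative.

import torch
import torch.nn.functional as F

def generator_loss(x_hat, x_ref, d_fake, lam=0.9):
    # LSGAN generator term (push the discriminator's score on the generated
    # image toward 1) plus a pixel-wise l1 term against the reference image;
    # the mixing weight lam is an illustrative choice.
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))
    pix = F.l1_loss(x_hat, x_ref)
    return lam * pix + (1.0 - lam) * adv

def discriminator_loss(d_real, d_fake):
    # LSGAN discriminator term: push real scores to 1 and fake scores to 0.
    return 0.5 * (F.mse_loss(d_real, torch.ones_like(d_real)) +
                  F.mse_loss(d_fake, torch.zeros_like(d_fake)))

# Toy usage with random tensors standing in for images and discriminator scores.
x_hat, x_ref = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
d_fake, d_real = torch.rand(4, 1), torch.rand(4, 1)
print(generator_loss(x_hat, x_ref, d_fake), discriminator_loss(d_real, d_fake))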

Recurrent Generative Adversarial Networks for Proximal Learning and Automated Compressive Image Recovery by Morteza Mardani, Hatef Monajemi, Vardan Papyan, Shreyas Vasanawala, David Donoho, John Pauly

Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem, that asks for proper statistical priors. Building effective priors is however challenged by the low train and test overhead dictated by real-time tasks; and the need for retrieving visually "plausible" and physically "feasible" images with minimal hallucination. To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations by permeating benefits from generative residual networks (ResNet) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are examined for a global task of reconstructing MR images of pediatric patients, and a more local task of superresolving CelebA faces, that are insightful to design efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2dB SNR, and the conventional compressed-sensing MRI by 4dB SNR with 100x faster inference. For image superresolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.
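
To illustrate what unrolling proximal gradient iterations with a learned proximal can look like, here is a minimal sketch under toy assumptions; the mask operator, block sizes, and step size are placeholders rather than the architecture from the paper.

import torch
import torch.nn as nn

class ResidualProximal(nn.Module):
    # A single small residual block standing in for the learned proximal
    # operator (a deliberately tiny stand-in, not the authors' network).
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class ToyMaskOperator:
    # Toy measurement model: an elementwise binary mask (placeholder for the
    # real undersampled MRI operator); it is its own adjoint.
    def __init__(self, mask):
        self.mask = mask
    def __call__(self, x):
        return x * self.mask
    def adjoint(self, r):
        return r * self.mask

class UnrolledProximalGradient(nn.Module):
    # Unrolls K proximal-gradient steps x <- prox(x - eta * A^T (A x - y)),
    # sharing one learned proximal across steps (the "recurrent" variant).
    def __init__(self, A, n_iters=5, step=0.1):
        super().__init__()
        self.A, self.n_iters, self.step = A, n_iters, step
        self.prox = ResidualProximal()

    def forward(self, y, x0):
        x = x0
        for _ in range(self.n_iters):
            grad = self.A.adjoint(self.A(x) - y)   # data-fidelity gradient
            x = self.prox(x - self.step * grad)    # learned proximal step
        return x

# One forward pass on random data; the supervised training loop is omitted.
mask = (torch.rand(1, 1, 32, 32) > 0.5).float()
A = ToyMaskOperator(mask)
y = A(torch.rand(1, 1, 32, 32))
x_hat = UnrolledProximalGradient(A)(y, x0=A.adjoint(y))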
 
 
 
