Perceptron: AI mixes concrete, designs molecules and thinks with space lasers

Welcome to Perceptron, TechCrunch’s weekly roundup of AI news and research from around the world. Machine learning is now a key technology in practically every industry, and there is far too much happening for anyone to keep track of it all. This column aims to collect some of the most interesting recent discoveries and papers in artificial intelligence and explain why they matter.

(Formerly known as Deep Science; check out previous editions here.)

This week’s roundup starts with a pair of forward-looking research projects from Meta/Facebook. The first is a collaboration with the University of Illinois at Urbana-Champaign aimed at reducing emissions from concrete production. Concrete accounts for roughly 8 percent of carbon emissions, so even a small improvement could help us meet our climate goals.

This is called “slump testing.”

What the Meta/UIUC team did was train a model on more than a thousand concrete formulas that varied in their proportions of sand, slag, ground glass, and other materials (you can see a sample of more photogenic concrete above). By finding subtle trends in this dataset, the model was able to output a number of new formulas that optimize for both strength and low emissions. The winning formula turned out to produce 40 percent lower emissions than the regional standard while meeting … well, some of the strength requirements. That’s very promising, and further research in this area should move things along again soon.
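To get a feel for the general recipe, here is a minimal sketch of the surrogate-model idea, assuming a simple tabular setup: fit regressors on a small table of mixes, then search candidate mixes for the lowest predicted emissions that still clear a strength threshold. The data, column meanings, threshold, and model choice are all invented for illustration and are not Meta/UIUC’s actual pipeline.

```python
# Hypothetical sketch of the surrogate-model idea: fit regressors on a small
# table of concrete mixes, then search candidate mixes for low predicted
# emissions subject to a minimum predicted strength. Column names, data, and
# the model choice are illustrative only, not Meta/UIUC's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy "dataset": fractions of cement, slag, sand, recycled glass (rows sum to 1),
# with made-up strength (MPa) and embodied-CO2 (kg per cubic meter) labels.
X = rng.dirichlet(np.ones(4), size=200)
strength = 60 * X[:, 0] + 35 * X[:, 1] + 20 * X[:, 2] + 15 * X[:, 3] + rng.normal(0, 2, 200)
emissions = 900 * X[:, 0] + 150 * X[:, 1] + 50 * X[:, 2] + 80 * X[:, 3] + rng.normal(0, 20, 200)

strength_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, strength)
emissions_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, emissions)

# Brute-force search over random candidate mixes: keep the lowest-emission mix
# that still clears a strength threshold (here 35 MPa, an arbitrary target).
candidates = rng.dirichlet(np.ones(4), size=5000)
pred_strength = strength_model.predict(candidates)
pred_emissions = emissions_model.predict(candidates)
feasible = pred_strength >= 35
best = candidates[feasible][np.argmin(pred_emissions[feasible])]
print("suggested mix (cement, slag, sand, glass):", np.round(best, 3))
```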

The second Meta study involves changing how language models work. The company wants to collaborate with neuroimaging experts and other researchers to compare how language models and actual brain activity behave when performing similar tasks.

In particular, they are interested in the human ability to anticipate words well ahead of the current one while speaking or listening: knowing, for example, that a sentence will end a certain way, or that a “but” is coming. AI models are getting very good, but they still mostly work by adding words one at a time, like Lego bricks, occasionally looking back to check whether the result makes sense. The work is just getting started, but it already has some interesting results.
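For a toy illustration of that one-word-at-a-time loop (nothing to do with Meta’s models; the bigram table and greedy decoding below are made up), the generation step looks roughly like this:

```python
# Toy illustration of "adding words one at a time": a greedy autoregressive loop
# over a hand-written bigram table. Real language models predict over huge
# vocabularies with learned networks, but generation has this same shape.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        next_probs = BIGRAMS.get(tokens[-1])
        if not next_probs:
            break
        # Pick the single most likely continuation given only the previous word.
        tokens.append(max(next_probs, key=next_probs.get))
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```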

Returning to the subject of materials, researchers at Oak Ridge National Laboratory are also getting into AI-assisted formulation. Using a dataset of quantum chemistry calculations, whatever those are, the team created a neural network that could predict a material’s properties, then inverted it so that they could input desired properties and have it suggest materials.

“Instead of taking a material and predicting its intended properties, we wanted to pick the ideal properties for our target and work backwards to quickly and efficiently develop those properties with a high degree of confidence. It’s called inverse design,” said ORNL’s Victor Fung. It seems to have worked, and you can check for yourself by running the code on GitHub.
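To make “inverse design” a bit more concrete, here is a minimal sketch of one common formulation, assuming a differentiable forward model: train a network to map composition to a property, freeze it, then optimize the input by gradient descent toward a target property. The toy data, network size, and target value are invented; ORNL’s actual code on GitHub will differ.

```python
# Hedged sketch of inverse design: train a forward network (composition -> property),
# then freeze it and optimize the *input* by gradient descent to hit a target
# property. Data, network size, and target are invented for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy forward problem: a 5-dimensional "composition" maps to one scalar property.
X = torch.rand(500, 5)
y = (X ** 2).sum(dim=1, keepdim=True)  # stand-in ground-truth property

forward_model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(forward_model(X), y)
    loss.backward()
    opt.step()

# Inverse step: hold the network fixed and optimize a candidate composition so
# the predicted property matches a desired target (here 2.0, arbitrary).
for p in forward_model.parameters():
    p.requires_grad_(False)
candidate = torch.rand(1, 5, requires_grad=True)
inv_opt = torch.optim.Adam([candidate], lr=5e-2)
target = torch.tensor([[2.0]])
for _ in range(300):
    inv_opt.zero_grad()
    loss = nn.functional.mse_loss(forward_model(candidate.clamp(0, 1)), target)
    loss.backward()
    inv_opt.step()
print("proposed composition:", candidate.clamp(0, 1).detach().numpy())
```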

View of the upper half of South America as a canopy height map.

Image credits: ETHZ

Interested in physical predictions at a completely different scale, this ETHZ project estimates the height of tree canopies around the world using data from ESA’s Copernicus Sentinel-2 satellites (optical imagery) and NASA’s GEDI instrument (orbital laser ranging). Combining the two in a convolutional neural network produces an accurate global map of tree heights up to 55 meters.
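As a rough sketch of how such a fusion can be wired up (not the ETHZ architecture; the band count, shapes, and network below are placeholders), a small convolutional network can regress per-pixel canopy height from multispectral patches while being supervised only at the sparse pixels where a lidar footprint provides a measurement:

```python
# Minimal sketch of the fusion idea: a small convolutional network regresses
# per-pixel canopy height from multispectral image patches, supervised only at
# the sparse pixels where spaceborne lidar (GEDI-style) footprints provide a
# height. Shapes, band count, and the architecture are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
B, BANDS, H, W = 8, 12, 64, 64           # batch of 12-band (Sentinel-2-like) patches

patches = torch.rand(B, BANDS, H, W)
heights = torch.rand(B, 1, H, W) * 55     # fake canopy heights in meters
lidar_mask = (torch.rand(B, 1, H, W) < 0.02).float()  # ~2% of pixels carry lidar labels

model = nn.Sequential(
    nn.Conv2d(BANDS, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                  # per-pixel height prediction
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    pred = model(patches)
    # Only penalize pixels that actually have a lidar height measurement.
    loss = ((pred - heights) ** 2 * lidar_mask).sum() / lidar_mask.sum()
    loss.backward()
    opt.step()
```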

The ability to conduct this kind of regular biomass survey on a global scale is important for climate monitoring, as NASA’s Ralph Dubayah explains: “We just don’t know how tall trees are globally. We need good global maps of where trees are, because whenever we cut down trees, we release carbon into the atmosphere, and we don’t know how much carbon we’re releasing.”

You can easily browse the data in map form here.

This DARPA project is also concerned with landscapes, and is dedicated to creating extremely large-scale simulated environments that virtual autonomous vehicles can navigate. They contracted with Intel for the work, though they might have saved a little money by contacting the makers of the game SnowRunner, which basically does what DARPA wants for $30.

Images of a simulated desert and a real desert side by side.

Image credits: Intel

The goal of RACER-Sim is to develop off-road vehicles that already know what it’s like to race through rocky desert and other harsh terrain. The four-year program will focus first on creating environments, building models in a simulator, and then transferring skills to physical robotic systems.

In the field of AI-assisted drug discovery, which now counts something like 500 companies, MIT is taking a sensible approach with a model that only suggests molecules that can actually be made. “Models often propose new molecular structures that are difficult or impossible to produce in the lab. If a chemist can’t actually make the molecule, its disease-fighting properties can’t be tested.”

Looks cool, but can it be done without unicorn horn powder?

The MIT model “ensures that molecules are made of purchasable materials and that the chemical reactions that take place between these materials follow the laws of chemistry.” It’s similar to what Molecule.one does, but integrated into the discovery process. It certainly would be nice to know that the miracle cure your AI proposes doesn’t require fairy dust or other exotic substances.
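As a cartoon of that constraint (emphatically not MIT’s method, just the shape of the idea), you can think of it as a filter that only keeps candidates whose proposed synthesis route uses purchasable building blocks and allowed reaction types. Everything below, names and data alike, is invented for illustration:

```python
# Toy illustration of a synthesizability constraint: a candidate is only kept
# if its proposed synthesis route uses purchasable building blocks and reaction
# types drawn from an allowed template list. All names and data are invented.
PURCHASABLE = {"benzaldehyde", "aniline", "acetic_anhydride", "ethanol"}
ALLOWED_REACTIONS = {"amide_coupling", "reductive_amination", "esterification"}

def synthesizable(route):
    """route: list of (reaction_name, [building_blocks]) steps."""
    for reaction, blocks in route:
        if reaction not in ALLOWED_REACTIONS:
            return False
        if not all(block in PURCHASABLE for block in blocks):
            return False
    return True

candidate_routes = {
    "candidate_A": [("reductive_amination", ["benzaldehyde", "aniline"])],
    "candidate_B": [("unicorn_horn_condensation", ["unicorn_horn_powder"])],
}
accepted = {name for name, route in candidate_routes.items() if synthesizable(route)}
print(accepted)  # {'candidate_A'}; candidate_B is rejected as unmakeable
```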

Another piece of work, from MIT, the University of Washington, and others, involves teaching robots how to interact with everyday objects, something we all hope becomes commonplace in the next couple of decades, since some of us don’t have dishwashers. The problem is that it’s very hard to pin down exactly how people interact with objects, since we can’t relay our own experience at high fidelity in a form useful for training a model. As a result, a lot of data annotation and manual labeling is usually required.

The new technique focuses on very close observation and inference of 3D geometry, so just a few examples of a person grasping an object are enough for the system to learn to do it itself. Where hundreds of examples or thousands of simulator iterations might normally be needed, here only 10 human demonstrations per object were required to manipulate that object effectively.

Image credits: Massachusetts Institute of Technology

With that minimal preparation, the system achieved an 85 percent success rate, far better than the baseline model. It is currently limited to a few categories of objects, but the researchers hope it can be generalized.

Last up this week is some promising work from DeepMind on a multimodal “visual language model” that combines visual knowledge with linguistic knowledge, so that ideas like “three cats sitting on a fence” have a kind of joint representation spanning grammar and imagery. After all, that’s how our own minds work.

Flamingo, their new “general purpose” model, can perform visual identification and also engage in dialogue, not because it is two models in one, but because it marries language understanding with visual understanding. As we’ve seen from other research organizations, this kind of multimodal approach produces good results, but it is still experimental and computationally intensive.
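One loose way to picture how language and vision get combined (not Flamingo’s actual architecture; the dimensions and layers below are arbitrary) is cross-attention, where text tokens attend over visual features so the language side can condition on what the vision encoder saw:

```python
# Loose sketch of a multimodal fusion step (not Flamingo's architecture): text
# token embeddings attend over visual features via cross-attention, so the
# language side can condition on the image. Dimensions are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)
D = 256
visual_feats = torch.rand(1, 49, D)   # e.g. a 7x7 grid of image-patch features
text_tokens = torch.rand(1, 12, D)    # embeddings for a 12-token prompt

cross_attn = nn.MultiheadAttention(embed_dim=D, num_heads=8, batch_first=True)
# Queries come from the text; keys and values come from the image.
fused, _ = cross_attn(query=text_tokens, key=visual_feats, value=visual_feats)

# The fused token states would then feed the language model's next-word prediction.
print(fused.shape)  # torch.Size([1, 12, 256])
```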

