Google is chasing OpenAI’s ‘reasoning’ model with groundbreaking tech
https://dataconomy.ru/2024/10/03/google-is-chasing-openais-reasoning-model-with-groundbreaking-tech/ – Thu, 03 Oct 2024

As Google attempts to compete with OpenAI, it’s stepping up its game by focusing on artificial intelligence (AI) that mimics human reasoning. After OpenAI released its new “o1” model in September 2024, Google has accelerated efforts to enhance its AI models, working on software designed to solve complex, multistep tasks in mathematics and programming.

This race has both companies using “chain of thought prompting,” a technique where AI takes additional time to process multiple related prompts before giving an answer, improving accuracy and reasoning abilities.
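The idea can be sketched with a minimal prompt-construction example. The wording below is invented for illustration and is not either company's actual prompt; it only shows how a chain-of-thought instruction differs from a direct question:

```python
# Illustrative sketch of chain-of-thought prompting: instead of asking for an
# answer directly, the prompt asks the model to reason through intermediate
# steps first. The prompt text is a generic example, not either lab's wording.

def direct_prompt(question: str) -> str:
    return f"Question: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    # The added instruction nudges the model to emit intermediate reasoning
    # before committing to a final answer, which tends to improve accuracy
    # on multistep math and programming tasks.
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, showing each intermediate "
        "result, and only then state the final answer.\n"
        "Reasoning:"
    )

print(chain_of_thought_prompt("What is 17 * 24?"))
```

In practice the extra "thinking time" comes from the model generating (and conditioning on) those intermediate steps before the final answer.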

As proof of that, Google has been testing new tools like AlphaProof and AlphaGeometry, AI models built to solve problems via mathematical reasoning and geometry. Its highly anticipated Gemini AI assistant is also being upgraded with better reasoning skills, moving it closer to OpenAI’s o1.

Google has been testing AI tools like AlphaProof and AlphaGeometry for mathematical and geometric problem-solving (Image credit)

The competition between the two is driving an increasing focus on creating AI that can not just chat but also solve real-world problems with greater precision, approaching the speed of human thought.

Early signs have already alarmed Google DeepMind: OpenAI’s o1 model, launched via ChatGPT in September, already appears to outpace DeepMind’s models in AI reasoning capabilities. However, Google has a range of AI offerings, including Gemini 1.5 Flash, a faster, more efficient model, and its Alpha series, which has solved complex math problems in international competitions.

Google is betting that its deep research and carefully launched products will keep it competitive in the AI race, even as OpenAI leads the way with new advances in reasoning.


Featured image credit: Google DeepMind/Unsplash

Google DeepMind embarked on a breathtaking journey through the labyrinths of our brains
https://dataconomy.ru/2024/05/13/google-deepmind-brain-images/ – Mon, 13 May 2024

The human brain, the three-pound conductor of our thoughts, emotions, and actions, has long been shrouded in mystery. Scientists have tirelessly endeavored to unravel its intricate workings, but its sheer complexity has presented a formidable challenge.

However, recent advancements in artificial intelligence (AI) are offering a powerful new lens through which we can observe this remarkable organ.

In a groundbreaking collaboration between Google researchers and Harvard neuroscientists, AI has been instrumental in generating a series of incredibly detailed images of the human brain. These images provide unprecedented views into the brain’s structure, offering a glimpse into the labyrinthine network of neurons that underlies our very existence.

A million gigabytes for a millionth of a brain

Imagine peering into a universe contained within a universe. This analogy aptly describes the challenge of studying the human brain. Its structure is mind-bogglingly intricate, composed of billions of neurons interconnected by trillions of synapses. To gain a deeper understanding, researchers require incredibly detailed information.

The research team used advanced AI tools to analyze a tiny sample of human brain tissue, specifically a section of the cortex from the anterior temporal lobe. This sample, though minuscule – representing only about one-millionth of the entire brain – contained a staggering amount of information: processing the data required 1.4 petabytes of storage, equivalent to over a million gigabytes.
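As a quick sanity check on that figure, converting petabytes to gigabytes (using decimal SI units, 1 PB = 1,000,000 GB) confirms the "over a million gigabytes" claim:

```python
# 1.4 petabytes expressed in gigabytes, using decimal SI units.
petabytes = 1.4
gigabytes = petabytes * 1_000_000
print(f"{petabytes} PB = {gigabytes:,.0f} GB")  # 1.4 PB = 1,400,000 GB
```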

The AI processed this data to construct a high-resolution, three-dimensional model of the brain tissue. This model allows scientists to virtually navigate the intricate folds and layers of the brain, examining individual neurons and their connections in unparalleled detail.

Our cortex is a six-layered marvel

The outermost layer of the brain, the cortex, is responsible for our most complex cognitive functions, including language, memory, and sensory perception. This region is further divided into six distinct layers, each with a specialized role.

One of the most remarkable images generated by Google’s AI offers a zoomed-out view of all the neurons within the sample tissue. By coloring the neurons based on their size and type, the image reveals the distinct layering of the cortex. This visualization allows scientists to study how different types of neurons are distributed throughout the cortex and how they might contribute to specific functions.

Further analysis of the individual layers can provide even more granular insights. By zooming in, researchers can examine the intricate connections between neurons within each layer. These connections, known as synapses, are the fundamental units of communication in the brain. Understanding the organization of these connections is crucial for deciphering how information flows through the brain and how different brain regions interact with each other.

Image credit: Google Research & Lichtman Lab (Harvard University). Renderings by D. Berger (Harvard University)

Mapping the myriad of neurons

The human brain is estimated to contain roughly 86 billion neurons, each with a unique role to play. These neurons come in a variety of shapes and sizes, and their specific characteristics influence how they transmit information.

Another image generated by Google’s AI provides a detailed census of the different types of neurons present within the sample tissue. By classifying the neurons based on their morphology, the researchers can begin to understand the relative abundance of different neuronal types in this specific brain region. This information can be compared to data from other brain regions to identify potential variations in neuronal composition across different functional areas.

Furthermore, AI can be used to analyze the spatial distribution of these different neuronal types. Are certain types of neurons clustered together, or are they more evenly dispersed throughout the tissue? Understanding these spatial patterns can shed light on how different neuronal populations interact with each other to form functional circuits within the brain.

Zooming into the dendrites and axons

The magic of the brain lies in its ability to transmit information across vast networks of neurons. This communication occurs through specialized structures called dendrites and axons. Dendrites are like tiny antennae that receive signals from other neurons, while axons act as long, slender cables that transmit signals away from the cell body.

One of the most captivating images generated by Google’s AI provides a close-up view of the intricate dance of dendrites and axons within the sample tissue. This image allows researchers to visualize the density of these structures and how they connect with each other. The complex branching patterns of dendrites and the long, winding paths of axons reveal the intricate web of communication that takes place within the brain.

By analyzing the connectivity patterns, scientists can begin to map the functional circuits that underlie specific brain functions. They can identify groups of neurons that are likely to be involved in processing similar types of information and trace the pathways through which information flows from one brain region to another.

Image credit: Google Research & Lichtman Lab (Harvard University). Renderings by D. Berger (Harvard University)

A new window into the human mind

The images generated by Google’s AI are a huge step for our ability to study the human brain. The detailed visualizations offer a window into the brain’s intricate structure, providing a wealth of information about the organization of neurons, their connections, and the cellular diversity within specific brain regions.

This newfound ability to explore the brain at such a granular level has the potential to revolutionize our understanding of neurological and psychiatric disorders. We already know AI’s potential in tackling Alzheimer’s, and by comparing brain tissue samples from healthy individuals to those with various conditions, researchers can identify potential abnormalities in neuronal structure or connectivity.

Furthermore, AI-generated images can be used to study the effects of aging, learning, and experience on the brain. By examining how the brain’s structure changes over time or in response to different stimuli, researchers can gain valuable insights into the mechanisms that underlie these processes.

The potential applications of this technology extend beyond the realm of basic research. The detailed models of brain tissue generated by AI could be used to develop more realistic simulations of brain function. These simulations could be used to test the effects of potential drugs or therapies before they are administered to human patients.

The vast majority of the brain remains uncharted territory, and many fundamental questions about its function continue to baffle scientists. However, these initial images offer a powerful glimpse into the brain’s hidden depths, and they pave the way for a future where we can finally begin to unravel the mysteries of this most complex organ.


Featured image credit: vecstock/Freepik

Google’s AlphaFold 3 AI system takes on the mystery of molecules
https://dataconomy.ru/2024/05/09/google-alphafold-3-ai-drug-discovery/ – Thu, 09 May 2024

The fight against diseases has been a constant pursuit in the medical field. From the dawn of medicine, researchers have tirelessly strived to understand the intricate workings of the human body and the microscopic foes that threaten our health. One crucial area of focus has been on medications, those life-saving molecules designed to interact with our biology and combat illnesses. However, efficiently designing these drugs has long been a challenging process, often requiring years of research and testing.

This is where a new tool emerges, armed with the power of artificial intelligence (AI). Google DeepMind, the company’s AI research lab, has introduced AlphaFold 3, a revolutionary molecular prediction model.

So, what exactly is AlphaFold 3 and how does it propose to change the landscape of drug discovery?

AlphaFold 3 observes the dance of molecules in living cells

Imagine billions of tiny machines working together inside every cell of your body. These machines, built from proteins, DNA, and other molecules, orchestrate the complex processes of life. But to truly understand how life works, we need to see how these molecules interact with each other in countless combinations.

In a recent paper by Google, researchers describe how AlphaFold 3 can predict the structure and interactions of all these life molecules with unmatched accuracy. The model significantly improves upon previous methods, particularly in predicting how proteins interact with other molecule types.

AlphaFold 3 builds on the success of its predecessor, AlphaFold 2, which made a breakthrough in protein structure prediction in 2020. While AlphaFold 2 focused on proteins, AlphaFold 3 takes a broader view. It can model a wide range of biomolecules, including DNA, RNA, and small molecules like drugs. This allows scientists to see how these different molecules fit together and interact within a cell.

The model’s capabilities stem from its next-generation architecture and training on a massive dataset encompassing all life’s molecules. At its core lies an improved version of the Evoformer module, the deep learning engine that powered AlphaFold 2. AlphaFold 3 then uses a diffusion network to assemble its predictions, similar to those used in AI image generation. This process starts with a scattered cloud of atoms and gradually refines it into a precise molecular structure.

The model’s ability to predict molecular interactions surpasses existing systems. By analyzing entire molecular complexes as a whole, AlphaFold 3 offers a unique way to unify scientific insights into cellular processes.

How does AlphaFold 3 work?

AlphaFold 3’s ability to predict the structure and interactions of biomolecules lies in its sophisticated architecture and training process. Here’s a breakdown of the technical details:

1. Deep learning architecture: The foundation

AlphaFold 3 relies on a sophisticated deep learning architecture, likely an enhanced version of the Evoformer module used in its predecessor, AlphaFold 2. Deep learning architectures are powerful tools capable of identifying complex patterns within data. In AlphaFold 3’s case, the patterns of interest lie within the amino acid sequences of biomolecules.

2. Processing the blueprint: Input and attention mechanisms

The model likely receives the amino acid sequence of a biomolecule as input. It then employs attention mechanisms to analyze the sequence and identify critical relationships between different amino acids. Attention mechanisms allow the model to focus on specific parts of the sequence that are most relevant for predicting the final structure.

3. Building the molecule: Diffusion networks take over

After processing the input sequence, AlphaFold 3 utilizes a diffusion network to assemble its predictions. Diffusion networks are a type of generative model that progressively refine an initial guess towards a more accurate output. In this context, the initial guess might be a scattered cloud of atoms representing the potential locations of each atom in the biomolecule.


Xaira secures a billion-dollar bet on the future of AI drug discovery


Through a series of steps, the diffusion network iteratively adjusts these positions, guided by the information extracted from the sequence and inherent physical and chemical constraints.
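The iterative refinement loop can be illustrated with a toy example. This is not AlphaFold 3's learned diffusion network; the hand-written update rule below simply nudges random "atom" positions toward a target, to mimic the scattered-cloud-to-structure trajectory described above:

```python
# Toy illustration of diffusion-style iterative refinement: start from a
# scattered set of coordinates and repeatedly nudge them toward a target
# configuration. In AlphaFold 3 each update is produced by a trained neural
# network conditioned on the sequence; here a fixed interpolation stands in.
import random

def refine(noisy, target, steps=50, step_size=0.2):
    positions = list(noisy)
    for _ in range(steps):
        # Each step moves every coordinate a fraction of the way toward the
        # target, standing in for one learned denoising update.
        positions = [p + step_size * (t - p) for p, t in zip(positions, target)]
    return positions

random.seed(0)
target = [1.0, -2.0, 0.5]                      # idealized final coordinates
cloud = [random.gauss(0, 3) for _ in target]   # scattered initial guess
refined = refine(cloud, target)
print(max(abs(r - t) for r, t in zip(refined, target)))  # small residual error
```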

4. Obeying the laws of nature: Physical and chemical constraints

AlphaFold 3 likely incorporates knowledge of physical and chemical constraints during structure prediction. These constraints ensure the predicted structures are realistic and adhere to scientific principles. Examples of such constraints include bond lengths, bond angles, and steric clashes (atoms being too close together).

5. Learning from examples: Training on vast datasets

AlphaFold 3’s impressive accuracy is attributed to its training on a massive dataset of biomolecules. This data likely includes known protein structures determined experimentally using techniques like X-ray crystallography. By analyzing these known structures alongside their corresponding amino acid sequences, AlphaFold 3 learns the intricate relationship between sequence and structure, enabling it to make accurate predictions for unseen biomolecules.

Applications in drug discovery are vast

One of the most exciting applications of AlphaFold 3 lies in drug design. The model can predict how drugs interact with proteins, offering valuable insights into how they might influence human health and disease.

For example, AlphaFold 3 can predict how antibodies bind to specific proteins, a crucial aspect of the immune response and the development of new antibody-based therapies.

Isomorphic Labs, a company specializing in AI-powered drug discovery, is already collaborating with pharmaceutical companies to utilize AlphaFold 3 for real-world drug design challenges. The goal is to develop new life-saving treatments by using AlphaFold 3 to understand new disease targets and refine existing drug development strategies.

Application studies for AlphaFold 3 in drug discovery have begun (Image credit)

Making the power accessible

To make AlphaFold 3’s capabilities available to a wider scientific community, Google DeepMind launched AlphaFold Server, a free and user-friendly research tool. This platform allows scientists worldwide to harness the power of AlphaFold 3 for non-commercial research. With just a few clicks, biologists can generate structural models of proteins, DNA, RNA, and other molecules.

AlphaFold Server empowers researchers to formulate new hypotheses and accelerate their work. The platform provides easy access to predictions regardless of a researcher’s computational resources or machine learning expertise. This eliminates the need for expensive and time-consuming experimental methods of protein structure determination.

Sharing responsibly and looking ahead

With each iteration of AlphaFold, Google DeepMind prioritizes responsible development and use of the technology. They collaborate extensively with researchers and safety experts to assess potential risks and ensure the benefits reach the broader scientific community.

AlphaFold Server reflects this commitment by providing free access to a vast database of protein structures and educational resources. Additionally, Google DeepMind is working with partners to equip scientists, particularly in developing regions, with the tools and knowledge to leverage AlphaFold 3 for impactful research.

AlphaFold 3 offers a high-definition view of the biological world, allowing scientists to observe cellular systems in their intricate complexity. This newfound understanding of how molecules interact promises to revolutionize our understanding of biology, pave the way for faster drug discovery, and ultimately lead to advancements in human health and well-being.


Featured image credit: Google

Google DeepMind’s fact quest: Improving long-form accuracy in LLMs with SAFE
https://dataconomy.ru/2024/04/01/google-deepmind-safe-llm-checker/ – Mon, 01 Apr 2024

Large language models (LLMs) have demonstrated remarkable abilities – they can chat conversationally, generate creative text formats, and much more. Yet, when asked to provide detailed factual answers to open-ended questions, they still can fall short. LLMs may provide plausible-sounding yet incorrect information, leaving users with the challenge of sorting fact from fiction.

Google DeepMind, the leading AI research company, is tackling this issue head-on. Their recent paper, “Long-form factuality in large language models” introduces innovations in both how we measure factual accuracy and how we can improve it in LLMs.

LongFact: A benchmark for factual accuracy

DeepMind started by addressing the lack of a robust method for testing long-form factuality. They created LongFact, a dataset of over 2,000 challenging fact-seeking prompts that demand detailed, multi-paragraph responses. These prompts cover a broad array of topics to test the LLM‘s ability to produce factual text in diverse subject areas.

SAFE: Search-augmented factuality evaluation

The next challenge was determining how to accurately evaluate LLM responses. DeepMind developed the Search-Augmented Factuality Evaluator (SAFE). Here’s the clever bit: SAFE itself uses an LLM to make this assessment!

Here’s how it works:

  1. Break it down: SAFE dissects a long-form LLM response into smaller individual factual statements.
  2. Search and verify: For each factual statement, SAFE crafts search queries and sends them to Google Search.
  3. Make the call: SAFE analyzes the search results and compares them to the factual statement, determining if the statement is supported by the online evidence.
SAFE itself utilizes a large language model to assess responses (Image credit)
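The three steps can be sketched in a few lines of Python. The helper functions (`split_into_facts`, `web_search`, `llm_judge`) are hypothetical stand-ins for the LLM calls and Google Search queries the real system issues:

```python
# Sketch of the SAFE pipeline described above. The three callables passed in
# are hypothetical stand-ins for the real system's LLM prompts and search
# queries; only the control flow reflects the paper's description.

def safe_evaluate(response: str, split_into_facts, web_search, llm_judge):
    """Return (supported, unsupported) fact lists for a long-form response."""
    supported, unsupported = [], []
    for fact in split_into_facts(response):   # 1. break it down
        evidence = web_search(fact)           # 2. search and verify
        if llm_judge(fact, evidence):         # 3. make the call
            supported.append(fact)
        else:
            unsupported.append(fact)
    return supported, unsupported

# Toy run with trivial stand-ins for the three components:
facts = {"Paris is in France": True, "Paris is in Spain": False}
sup, unsup = safe_evaluate(
    "response text",
    split_into_facts=lambda r: list(facts),
    web_search=lambda f: facts[f],
    llm_judge=lambda f, evidence: evidence,
)
print(sup, unsup)
```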

F1@K: A new metric for long-form responses

DeepMind also proposed a new way to score long-form factual responses. The traditional F1 score (used for classification tasks) wasn’t designed to handle longer, more complex text. F1@K balances precision (the percentage of provided facts that are correct) against a concept called recall.

Recall takes into account a user’s ideal response length – after all, an LLM could achieve perfect precision by providing a single correct fact, while a detailed, mostly accurate answer should score higher overall.
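A minimal implementation of this scoring idea, following the paper's definition (precision balanced against recall capped at K facts), might look like this; the function name and edge-case handling are our own:

```python
# F1@K sketch: precision is the fraction of supplied facts that are
# supported; recall is the supported count relative to K, the number of
# facts an ideal response would contain, capped at 1. F1@K is their
# harmonic mean.

def f1_at_k(supported: int, not_supported: int, k: int) -> float:
    total = supported + not_supported
    if total == 0 or supported == 0:
        return 0.0
    precision = supported / total
    recall = min(supported / k, 1.0)  # a single correct fact can't max out recall
    return 2 * precision * recall / (precision + recall)

# A lone correct fact has perfect precision but scores poorly at K=64,
# while a detailed, mostly-correct answer scores far better:
print(f1_at_k(supported=1, not_supported=0, k=64))
print(f1_at_k(supported=50, not_supported=5, k=64))
```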

Bigger LLMs, better facts

DeepMind benchmarked a range of large language models of varying sizes, and their findings aligned with the intuition that larger models tend to demonstrate greater long-form factual accuracy. This can be explained by the fact that larger models are trained on massive datasets of text and code, which imbues them with a richer and more comprehensive understanding of the world.

Imagine an LLM like a student who has studied a vast library of books. The more books the student has read, the more likely they are to have encountered and retained factual information on a wide range of topics. Similarly, a larger LLM with its broader exposure to information is better equipped to generate factually sound text.

In order to perform this measurement, Google DeepMind tested the following models: Gemini, GPT, Claude (versions 3 and 2), and PaLM. The results are as follows:

DeepMind benchmarked various large language models of different sizes and found that larger models tend to exhibit greater long-form factual accuracy (Image credit)

The takeaway: Cautious optimism

DeepMind’s study shows a promising path toward LLMs that can deliver more reliable factual information. SAFE achieved accuracy levels that exceeded human raters on certain tests.

However, it’s crucial to note the limitations:

  • Search engine dependency: SAFE’s accuracy relies on the quality of search results and the LLM’s ability to interpret them.

  • Non-repeating facts: The F1@K metric assumes an ideal response won’t contain repetitive information.

Despite potential limitations, this work undeniably moves the needle forward in the development of truthful AI systems. As LLMs continue to evolve, their ability to accurately convey facts could have profound impacts on how we use these models to find information and understand complex topics.


Featured image credit: Freepik

SIMA has the potential to save humanity by… playing video games?
https://dataconomy.ru/2024/03/13/google-deepmind-sima-generalist-ai-agent/ – Wed, 13 Mar 2024

Google DeepMind’s recent breakthrough with SIMA (Scalable Instructable Multiworld Agent) shines a spotlight on the rapid progress in making generalist AI agents, specifically designed for 3D virtual environments, a reality.

This progress carries transformative potential, not just for the gaming industry, but for the way we interact with virtual spaces across a broad spectrum of applications.

With enhanced capabilities in understanding instructions, adapting to new tasks, and reasoning within the constraints of virtual worlds, SIMA-like agents offer the potential to reshape several key areas.

SIMA’s massive success

DeepMind’s latest innovation is SIMA, which stands for Scalable Instructable Multiworld Agent. Unlike previous AI focused on mastering a single game, SIMA is a generalist AI.

SIMA isn’t limited to pixels on the screen. It can process both visual information (what it sees in the game) and natural language instructions (what a human tells it to do). This multimodal learning allows for a more nuanced understanding of the game world.

A generalist AI agent strives to be a versatile tool within virtual environments. Unlike traditional AI, which focuses on a single task, a generalist AI aims to learn and perform a wide range of actions. This allows it to adapt to new situations and environments it hasn’t been specifically trained for.

SIMA isn’t trained on just one game. DeepMind collaborated with several game developers, exposing SIMA to a variety of titles like No Man’s Sky and Teardown. This diversity strengthens its ability to adapt to new environments.

SIMA doesn’t need to be spoon-fed every rule. By following instructions, it can learn new skills within a game, like navigating a new area, crafting an item, or using in-game menus. This makes it far more versatile than traditional AI agents.

Don’t be fooled by the lack of focus on achieving top scores. While impressive, that’s not the main objective.

SIMA’s true success lies in its ability to understand and act on human instructions within a game environment. This research marks a huge step toward creating an AI that can be helpful to us in the real world.

Some of the games where Google DeepMind runs this groundbreaking AI model are:

  • Goat Simulator 3
  • Hydroneer
  • No Man’s Sky
  • Satisfactory
  • Teardown
  • Valheim
  • Wobbly Life

Apart from all these games, the Google DeepMind team also tested SIMA’s capabilities in realistic simulations of its own creation, called “Research Environments”. These environments – Construction Lab, Playhouse, ProcTHOR, and WorldLab – simulate many settings where artificial intelligence is expected to be integrated in the near future.

The magic behind SIMA

Multimodal input processing

SIMA utilizes large language models (LLMs), likely based on the Transformer architecture, to process and understand natural language instructions given by a user. LLMs excel at handling sequential data like text, making them well-suited for this task. To make sense of its surroundings, SIMA employs convolutional neural networks (CNNs) to process visual input from the 3D environment.

CNNs are exceptionally good at extracting spatial features and patterns from images or video streams. SIMA likely uses multiple CNNs to create different levels of representation within the visual input for comprehensive understanding.
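The resulting data flow can be shown schematically. Both encoders below are toy stand-ins (the real system uses Transformer and CNN features); the sketch only illustrates how text and visual features are fused into one state representation:

```python
# Schematic sketch of multimodal input processing: a text encoder and an
# image encoder each produce a feature vector, and the agent acts on their
# concatenation. Both encoders are invented toy functions, not SIMA's
# actual networks; only the fusion pattern is the point.

def encode_text(instruction: str) -> list:
    # Stand-in for an LLM embedding of the instruction.
    return [len(instruction) / 100.0, instruction.count(" ") / 10.0]

def encode_frame(pixels: list) -> list:
    # Stand-in for CNN features: mean brightness and a crude edge measure.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    edges = sum(abs(a - b) for row in pixels for a, b in zip(row, row[1:]))
    return [mean, edges / len(flat)]

def fused_state(instruction: str, pixels: list) -> list:
    # Concatenate language and vision features into one state vector.
    return encode_text(instruction) + encode_frame(pixels)

state = fused_state("pick up the axe", [[0.1, 0.9], [0.5, 0.5]])
print(len(state))  # 4-dimensional fused representation
```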

Self-instruction

One of the key innovations underlying SIMA is its ability to break down complex instructions into a sequence of simpler sub-tasks. This is likely achieved through a combination of natural language processing (to analyze the instructions) and hierarchical reinforcement learning (RL).

Hierarchical RL allows agents to learn complex behaviors by building upon sequences of lower-level actions.

Additionally, SIMA can generate its own training data and targets by observing its actions within the environment and the resulting changes. This self-supervision technique is crucial for enabling continuous learning and adaptation in new environments, giving it flexibility.
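The decomposition idea can be sketched as follows: a complex instruction maps to an ordered list of simpler actions handed to a lower-level policy. The lookup table here is invented for illustration; SIMA learns this mapping rather than consulting a table:

```python
# Toy sketch of sub-task decomposition in a hierarchical agent. The
# SUBTASKS table is a hypothetical stand-in for a learned decomposition.

SUBTASKS = {
    "craft a stone axe": ["find wood", "find stone", "open crafting menu",
                          "select axe recipe", "confirm craft"],
    "build a campfire": ["gather branches", "clear ground", "place branches",
                         "light fire"],
}

def decompose(instruction: str) -> list:
    # Fall back to treating the raw instruction as a single sub-task.
    return SUBTASKS.get(instruction.lower(), [instruction])

def execute(instruction: str, do_action) -> list:
    done = []
    for step in decompose(instruction):
        do_action(step)  # hand each sub-task to the low-level policy
        done.append(step)
    return done

log = execute("Craft a stone axe", do_action=lambda step: None)
print(log)
```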

Google DeepMind’s generalist AI, SIMA, utilizes multiple techniques used in complex machine learning and artificial intelligence algorithms (Image credit)

Zero-shot generalization

SIMA’s impressive ability to perform new tasks without explicit training likely stems from extensive pre-training on a massive dataset of diverse 3D environments and associated instructions. This pre-training allows the model to build a rich internal representation of virtual worlds and common instructions, enabling it to generalize knowledge.

It’s probable that a meta-learning approach is used during pre-training, encouraging SIMA to develop a strategy for “learning how to learn”.

This allows the agent to acquire new skills quickly within unseen environments.

You may learn more about Google DeepMind’s work on generalist AI agent training using games from their research paper.

Learn from games to shine in the real world

Believe it or not, SIMA marks a turning point in the development of AI.

Video games offer the ideal training ground for AI because they are dynamic, self-contained worlds with clear goals, rules, and feedback mechanisms.

Within these virtual spaces, AI agents can experiment, make mistakes, and learn from their successes and failures – all without the risks or limitations of the real world. As SIMA explores more intricate game worlds and its underlying models become more powerful, it develops the ability to adapt, understand instructions, and strategize to achieve goals.

These skills, honed in the safe sandbox of a game, translate into a versatile and capable AI that can potentially navigate the complexities of our real world.

This is just the beginning of what’s possible when AI learns through play.

Actually, the potential of AI to address real-world challenges becomes clear when we examine the prompts used by Google DeepMind in various games.

SIMA’s prompt examples and the in-game actions it performs form the basis of research into integrating AI technologies into real life (Image credit)

To give a few examples:

The “Pick up iron ore” prompt in Satisfactory hints at the potential for AI to improve safety in hazardous industries like mining. The Bureau of Labor Statistics reports a distressing rise in fatal mining injuries, with a 21.8% increase from 2020 to 2021. Imagine the lives that could be saved if AI-powered robots, less prone to human error or fatigue, were to handle dangerous mining tasks.

In the survival game Valheim, the “Find water” prompt highlights the power of AI in addressing vital issues like water scarcity. The World Bank reports that about 226 million people in Eastern and Southern Africa did not have access to basic water services, and 381 million people lacked access to basic sanitation services.

Similarly, a robot that could continuously survey natural water sources in such regions could touch the lives of billions of people.

Although artificial intelligence seems to be identified with image generation and incessant chatbots nowadays, it is much more than that, and studies like these hold immense potential for a better future for all.


Featured image credit: Freepik.

Google DeepMind welcomes GraphCast weather forecast AI
https://dataconomy.ru/2023/11/15/google-deepmind-graphcast-weather-forecast-ai/ – Wed, 15 Nov 2023

Google DeepMind’s latest innovation, GraphCast weather forecast AI, marks a significant advancement in weather prediction technology. The impact of weather is ubiquitous, influencing everything from daily wardrobe choices to energy production, and in extreme cases, creating storms that have profound effects on communities. As global weather patterns become increasingly volatile, the demand for swift and reliable weather forecasts has escalated.

A recent publication in Science introduces Google DeepMind’s GraphCast, an AI model that sets new standards in medium-range weather forecasting. GraphCast excels in predicting weather conditions up to 10 days in advance, surpassing the accuracy and speed of the established industry standard—the High Resolution Forecast (HRES), developed by the European Centre for Medium-Range Weather Forecasts (ECMWF).

Beyond its remarkable forecasting precision, GraphCast is adept at providing earlier warnings for severe weather events. It boasts advanced capabilities in predicting cyclone paths, identifying atmospheric rivers that indicate potential flooding, and forecasting extreme temperature events, all of which are crucial for effective disaster preparedness and potentially life-saving interventions.

GraphCast represents a significant stride in applying AI to weather prediction, delivering forecasts that are not only more accurate but also more efficient. This breakthrough is pivotal for informed decision-making across various industries and societies. In a move to democratize AI-powered weather forecasting, Google DeepMind has open-sourced the GraphCast model code, enabling scientists and forecasters worldwide to enhance daily life for billions. Notably, weather agencies like ECMWF are already utilizing GraphCast, conducting live experiments with the model’s forecasts on their platform.

Google DeepMind’s GraphCast tackles the complexity of medium-range weather prediction

Weather forecasting stands as one of humanity’s most enduring and intricate scientific challenges. The ability to make medium-range predictions accurately is crucial for a myriad of sectors, from renewable energy generation to planning large-scale events. However, achieving accuracy and efficiency in these forecasts has always been a formidable task.

Traditionally, weather forecasts have relied on Numerical Weather Prediction (NWP). This method starts with meticulously crafted physics equations, subsequently converted into algorithms for supercomputer processing. While this approach has been a monumental achievement in science and engineering, crafting these equations and algorithms demands extensive expertise, time, and substantial computing resources to yield precise predictions.


Deep learning presents an alternative route: leveraging data rather than hand-crafted physical equations to construct a weather forecasting system. Google DeepMind's GraphCast weather forecast AI harnesses decades of historical weather data, learning the complex causal relationships that dictate the evolution of Earth's weather. This method provides insights into weather patterns from the present extending into the future.

Notably, GraphCast doesn’t operate in isolation but works in synergy with traditional methods. GraphCast was trained using four decades of weather reanalysis data from ECMWF’s ERA5 dataset. This extensive collection, comprising historical weather observations like satellite imagery, radar, and readings from weather stations, utilizes traditional NWP models to create comprehensive records of global historical weather, filling in gaps where direct observations might be lacking.

How does GraphCast Weather forecast AI work?

The integration of machine learning and Graph Neural Networks (GNNs) in Google DeepMind’s GraphCast weather forecast AI marks a transformative approach in meteorological prediction. This innovative system specializes in processing spatially structured data, an essential factor for accurate weather modeling.

GraphCast weather forecast AI operates at an extraordinary resolution of 0.25 degrees longitude/latitude, translating to a detailed 28km x 28km grid at the equator. This high level of precision covers over a million grid points across the Earth’s surface. At these points, Google DeepMind’s model comprehensively predicts critical Earth-surface variables, including temperature and wind dynamics, alongside six atmospheric factors across 37 altitude levels, such as humidity and temperature variations.
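Those figures can be checked with a little arithmetic. The sketch below assumes the poles-inclusive 721 × 1440 convention used by the ERA5 dataset for a 0.25-degree grid (the exact count depends on how the poles are handled):

```python
# Grid-point count for a global 0.25-degree latitude/longitude grid.
LAT_STEP = LON_STEP = 0.25

n_lat = int(180 / LAT_STEP) + 1   # 721 rows, -90 to +90 inclusive
n_lon = int(360 / LON_STEP)       # 1440 columns, longitude wraps around
grid_points = n_lat * n_lon
print(grid_points)                # 1038240 -- "over a million grid points"

# The article lists six atmospheric variables at 37 altitude levels
# per grid point (surface variables come on top of this):
per_point_atmospheric = 6 * 37
print(grid_points * per_point_atmospheric)  # atmospheric values per time step
```

At over 230 million atmospheric values per time step, the scale of the prediction task becomes tangible.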

Despite the intense computational demands during its training phase, the GraphCast weather forecast AI emerges as a highly efficient forecasting tool. The AI can complete 10-day weather forecasts in less than a minute when run on a single Google TPU v4 machine. This efficiency is a significant improvement over traditional methods like HRES, which require several hours and a vast array of supercomputers.


In a rigorous performance test against the established HRES system, Google DeepMind’s GraphCast weather forecast AI demonstrated superior accuracy in over 90% of 1380 test variables and forecasting periods. The model’s performance is even more striking within the troposphere, the crucial atmospheric layer closest to Earth. Here, GraphCast outperformed HRES on 99.7% of test variables, showcasing its exceptional capability in predicting future weather conditions.

GraphCast weather forecast AI requires just two inputs: the weather state from six hours prior and the current weather conditions. With this information, it accurately forecasts the upcoming six-hour weather scenario. This process can be repeated in six-hour increments, enabling Google DeepMind's model to provide state-of-the-art forecasts up to a remarkable 10 days in advance.
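That rollout loop can be sketched in a few lines of Python. The `model` below is a stand-in for the trained network (a hypothetical callable, not the real GraphCast interface), but the two-state, six-hour autoregressive structure is the one described above:

```python
def rollout(model, state_prev, state_now, days=10, step_hours=6):
    """Autoregressively extend a forecast.

    model: callable taking (state at t-6h, state at t) and returning
           the predicted state at t+6h -- a stand-in for GraphCast.
    """
    n_steps = days * 24 // step_hours   # 40 steps for a 10-day forecast
    forecast = []
    for _ in range(n_steps):
        state_next = model(state_prev, state_now)
        forecast.append(state_next)
        # Slide the two-state window forward six hours: the newest
        # prediction becomes the "current" input for the next step.
        state_prev, state_now = state_now, state_next
    return forecast

# Toy usage with a dummy "model" that averages its two inputs:
steps = rollout(lambda prev, now: (prev + now) / 2, 10.0, 12.0)
print(len(steps))  # 40 six-hour steps = 10 days
```

Note that each step feeds on the previous prediction, which is why efficiency per step matters so much for a 10-day horizon.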

Early detection of severe weather with GraphCast

Google DeepMind’s GraphCast weather forecast AI has demonstrated an exceptional ability to identify severe weather events earlier than conventional models, a feature not explicitly trained for. This capability exemplifies how GraphCast could significantly enhance preparedness, potentially saving lives and mitigating the impact of storms and extreme weather on communities.

By integrating a simple cyclone tracker into GraphCast forecasts, the model achieves superior accuracy in predicting cyclone movements compared to the HRES model. Notably, in a live demonstration on the ECMWF website, GraphCast accurately forecasted Hurricane Lee’s landfall in Nova Scotia nine days in advance, a prediction more precise and earlier than those made by traditional forecasting models.
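The article doesn't detail the tracker, but a "simple cyclone tracker" of this kind typically follows the local minimum of mean sea-level pressure from one forecast step to the next. A minimal sketch over toy data (field layout and parameter names are illustrative assumptions, not GraphCast's actual interface):

```python
def track_cyclone(pressure_fields, start, search_radius=5):
    """Follow a pressure minimum through successive forecast steps.

    pressure_fields: list of 2-D grids (rows x cols) of mean sea-level
                     pressure, one grid per 6-hour forecast step.
    start: (row, col) of the cyclone centre in the first field.
    search_radius: max movement of the centre, in grid cells per step.
    """
    track = [start]
    r, c = start
    for field in pressure_fields[1:]:
        n_rows, n_cols = len(field), len(field[0])
        # Search a small window around the previous centre for the
        # new pressure minimum.
        candidates = [
            (field[i][j], (i, j))
            for i in range(max(0, r - search_radius),
                           min(n_rows, r + search_radius + 1))
            for j in range(max(0, c - search_radius),
                           min(n_cols, c + search_radius + 1))
        ]
        _, (r, c) = min(candidates)
        track.append((r, c))
    return track
```

Applied to GraphCast's 10-day output, such a tracker yields a forecast storm path without the model ever being trained on cyclones explicitly.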


In the context of a warming world, predicting extreme temperatures is increasingly critical. Google DeepMind’s GraphCast excels in identifying potential heatwaves, anticipating when temperatures are likely to exceed historical highs for any given location. This predictive capability is vital for preparing for heatwaves, disruptive and dangerous events that are occurring with greater frequency.


AI-powered weather forecasting

Google DeepMind’s GraphCast weather forecast AI now claims to be the world’s most precise system for 10-day global weather forecasting, offering unprecedented capabilities in predicting extreme weather events well into the future. As climate change continues to reshape weather patterns, GraphCast is poised to adapt and enhance its performance with the integration of increasingly high-quality data.


Featured image credit: Wolfgang Hasselmann/Unsplash

The 5 Most Exciting University AI Projects
https://dataconomy.ru/2017/09/20/artificial-intelligence-university-projects/ (Wed, 20 Sep 2017)

Artificial intelligence is one of the most exciting fields in technology, with remarkable advancements happening on a regular basis. Many of the world's top universities are involved in AI projects covering a wide range of subjects and objectives. Institutions from the University of Washington to Carnegie Mellon to Harvard and Oxford are putting their best and brightest minds toward intriguing AI work. Of the many artificial intelligence projects underway at universities around the world, these are the five most exciting.

Decision-Theoretic Control for Crowdsourced Workflows – University of Washington Paul G Allen School

The Paul G. Allen School of Computer Science and Engineering is home to a variety of interesting AI projects. Decision-Theoretic Control for Crowdsourced Workflows examines the decision-making process behind crowdsourced work and aims to automate it. Crowdsourced workflows are run by companies through services like Amazon's Mechanical Turk: a project is split into small chunks assigned to a network of individuals, each of whom checks parts of the others' work, ultimately producing a finished project that many contributors have had a hand in. One challenge for companies employing this type of work is that planning such projects is difficult and time-consuming; with so many moving parts, aligning the network of contributors effectively is hard. This project aims to develop an AI that can delegate these tasks by itself, which would be a major innovation in the field.

4CAPS – Carnegie Mellon

4CAPS is a cognitive architecture developed at Carnegie Mellon that can account for both traditional behavioral data and the results of neuroimaging studies. It is a hybrid architecture, combining connectionist and symbolic mechanisms, and is the successor to the original CAPS architecture developed in 1982.

Aries – University of Memphis Institute for Intelligent Systems

Aries is an exciting education-related project being developed by the University of Memphis Institute for Intelligent Systems. It is an educational environment in which two animated pedagogical agents hold conversations with human students about various science subjects. The program is meant to engage students and help them learn course material, and will be integrated into electronic textbooks in a game-like environment. It is a project with many exciting possibilities in the field of education.

Harvard / IARPA Brain Study

Harvard University is studying how AI can be made to think faster and more efficiently, like the human brain. The university has received $28 million in funding from the Intelligence Advanced Research Projects Activity (IARPA) to study this subject. The general goal is to understand how artificial intelligence can gain the efficiencies of the human brain, which spends far less effort processing information than current AI systems do. Ideally, this will yield a synthesis of the two: the functionality and efficiency of the human brain combined with AI's capacity to process great volumes of data.

Oxford University / Google Deepmind

Some of the top researchers from Oxford University have been hired by Google to work with its DeepMind team, which focuses on artificial intelligence. They will work on image recognition and natural language understanding. This is part of a broader partnership between Oxford and DeepMind, one with lots of exciting potential.

Overall, a great deal of exciting artificial intelligence work is happening at universities around the world. From promising educational tools to in-depth studies to AI planning projects, university groups are pursuing many different AI projects that have the potential to greatly impact their fields.

Like this article? Subscribe to our weekly newsletter to never miss out!
