Tuesday, November 22, 2022
AI for Good News

The Most Important AI for Good Trends of 2022 + Some Dangers

The rise of open research communities is ushering in a redistribution of power in AI research and accessibility, AI safety is slowly being taken seriously, and AI is assisting fundamental breakthroughs across the natural sciences! This article describes the most important AI for Good trends of 2022 and what they mean for you.

While the world is in the grip of war and climate change, the latest State of AI report came as a breath of fresh air, showing AI trends that gave us hope and restored our faith in humanity. Here are three trends that we see pushing AI for Good forward.

Open research & power distribution

While it seemed at some point that cutting-edge AI like LLMs would be dominated by the big few - because of massive training costs and data needs - decentralized research collectives are slowly gaining "market share" in AI research. A big example is Stable Diffusion: an image-generating diffusion model open-sourced at a training cost of "only" 600k USD.

Three images generated by Stable Diffusion by Stability AI, a community-driven, open-source AI company that has set out to solve the "lack of organization" in the open-source AI community.

What it makes us think

What makes humans great has always been our ability to flexibly collaborate. Neanderthals had bigger brains, but we were more likely to share our knowledge and tools - we're glad that humanity is again playing to its biggest strength!

What this means for you

In our bootcamp and challenges we put a strong focus on transfer learning and building on existing tools. The open-sourcing of these previously closed-off models allows us to use them in AI for Good. And with more startups able to apply these cutting-edge technologies (many of them friends of the community), you will also get the chance to work at the cutting edge of AI.
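To give a feel for what this open-sourcing unlocks in practice, here is a minimal sketch of generating an image with the openly released Stable Diffusion weights via Hugging Face's diffusers library (the checkpoint ID and prompt are just illustrative choices, not the only way to do this):

```python
# Minimal sketch: using the open-sourced Stable Diffusion weights through
# Hugging Face's diffusers library. Checkpoint and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # the openly released v1.4 checkpoint
    torch_dtype=torch.float16,        # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

image = pipe("a solar farm in the savanna, aerial photo").images[0]
image.save("solar_farm.png")
```

A few lines like these, plus a consumer GPU, now replace what used to require a closed model behind an API - which is exactly why transfer learning on open models is so central to our challenges.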

Advances in science due to AI

AI is solving long-standing problems left and right in science, in every discipline:

- Applied math: discovering more efficient matrix multiplication algorithms (DeepMind's AlphaTensor, built on AlphaZero) - see the sketch below
- Energy: using reinforcement learning to stabilize plasma in fusion reactors (EPFL's TCV tokamak in Lausanne)
- Medicine: protein folding (DeepMind's AlphaFold) and treatment plan optimization
- Recycling: discovering enzymes that can break down plastic (UT Austin)

But also more "meta" topics, like using LLMs to help AI use other tools such as websites, apps and robotic equipment (don't tell your mom).
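To give a feel for what "more efficient matrix multiplication" means, here is a toy illustration - this is Strassen's classic 1969 trick, not AlphaTensor's own algorithm: multiplying two 2x2 matrices with 7 scalar multiplications instead of the naive 8. AlphaTensor searched for decompositions like this automatically and found some that beat the previously known best for certain matrix sizes.

```python
# Toy illustration (Strassen's trick, not AlphaTensor's algorithm):
# 7 scalar multiplications for a 2x2 matrix product instead of the naive 8.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Applied recursively to large matrices, saving even one multiplication per block compounds into a real asymptotic speedup - which is why automatically discovering such decompositions matters.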

A protein folded by DeepMind's AlphaFold - they've solved a 50-year-old problem in biology and are set to release the structure of every protein known to humanity, vastly speeding up disease and drug research.

What this means for you

No matter what discipline you’re in, you will be using AI. So, great that you’re here! But you won’t per se need to be a fundamental AI researcher to reap the benefits.

What it makes us think

With the "vulnerable world hypothesis", Oxford Prof. Nick Bostrom warns for the dangers of truly destructive technologies becoming cheap and simple — and therefore exceptionally difficult to control.

While the breakthroughs highlighted show nothing but goodness, a model used for drug discovery can just as easily be used to develop the next generation of deadly, hard-to-detect nerve agents or killer viruses. Developing a weapon like this once required enormous resources and the leading scientists in the field, but can now be achieved by "just" transfer-learning an ML model. This makes it different from previous destructive forces like the atomic bomb.

It also makes us think that we need to start doing applied science AI for Good challenges! We’ll get right on it :)

AI safety is delivering tangible results

The infamous black box is being cracked, for example by using reinforcement learning from human feedback (RLHF) to adjust the responses of LLMs, or by using direct natural-language feedback to adjust models (NYU). And since manually testing models can be quite arduous, DeepMind has developed "red teaming": using language models to provoke unsafe or power-seeking behavior in other language models and adjust them automatically.
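As a rough sketch of the red-teaming idea - all three models below are small public stand-ins rather than the ones DeepMind used, and the loop is heavily simplified: one LM generates probing prompts, the target LM answers, and a safety classifier flags failures that you would then train away.

```python
# Heavily simplified red-teaming loop. All models are small public stand-ins
# chosen for illustration, not the models or prompts from DeepMind's work.
from transformers import pipeline

attacker = pipeline("text-generation", model="gpt2")      # generates probing prompts
target = pipeline("text-generation", model="distilgpt2")  # the model under test
safety = pipeline("text-classification", model="unitary/toxic-bert")  # flags bad outputs

seed = "Ask a question designed to elicit unsafe advice:"
failures = []
for _ in range(10):
    probe = attacker(seed, max_new_tokens=30)[0]["generated_text"]
    answer = target(probe, max_new_tokens=50)[0]["generated_text"]
    verdict = safety(answer[:512])[0]
    if verdict["label"] == "toxic" and verdict["score"] > 0.5:
        failures.append((probe, answer))  # cases to fine-tune away later

print(f"{len(failures)}/10 probes elicited flagged behavior")
```

The point is the automation: instead of humans hand-writing test cases, the attacker model can produce thousands of probes, and the flagged failures become training data for the next, safer version of the target.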

Two graphs showing the polarization of the political spectrum in 1994 vs 2017. Between 1994 and 2017, polarization - measured as the shrinking overlap between the voting standpoints of the left and the right - greatly increased. Some attribute this to the use of recommender systems.

What this means for you

AI safety is finally being taken seriously, meaning a research career or even startup in the field is feasible!

What it makes us think

While it’s great that there’s attention to AI with “malicious intent” like power-seeking behavior, we’d like to see a stronger push to tackle arguably the biggest AI threat to humanity right now: recommender systems and the like influencing our behavior by optimizing for our reactiveness. Viktor Frankl said: “Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom.”

The clearest sign of this space shrinking is political polarization, made possible by filter bubbles (see image). The ill intent, or even just negligence, of companies can topple democracies. Solving this requires government legislation and lobbying... Interestingly, we can now also use deep reinforcement learning to craft policies likely to be accepted by governments - so maybe AI offers a solution too!

We hope that this short summary of what we think are the most important trends has given you the same lightness in your step as it gave us. Be sure to check out the full State of AI report at stateof.ai, and as always…

Have a fruitful day!

Your Chief Cheerleader Buster

