BEYOND AI.
40 MINUTES A DAY.

'The paradox is that the new is new. In this sense, one cannot buy or sell something new. You can only develop it.' - Prof. Dr. Sabine Fischer

A sol, the solar day on Mars, is 40 minutes longer than a day on Earth. The same goes for our days at Birds on Mars: that adds up to more than 3 hours a week we all spend on learning, creating, innovating, hacking, making and re-thinking.

 

sol can be a place for driven improvisation and creative freedom where we live our slogan
"Connecting Intelligences".

sol incites new, unseen and unheard perspectives on AI and the in-between spaces and tones of human, organizational and artificial intelligence.

It is an open space for innovation, irritation, thinking and playing where AI can meet climate and nature, city and community. It connects AI with social and political work as well as arts.

sol can be a space for products, thoughts, journeys and radical moments.

Over time, sol has turned into a network of people and visions, experiences and adventures: a space to turn things upside down and to create new links.


MULTIMODAL AI SYNTHESIZER - #GENERATIVEAI

Together with the electronic ID (eID) musical ensemble, we set out to create an interactive experience for their audiences. The result is a one-of-a-kind modular synthesizer that employs generative AI to facilitate a dynamic connection between movement, sound, speech, image, and video inputs. Users have the power to mix, match, manipulate, and apply effects to content, just as they would with a traditional synthesizer. Through the synthesizer, visitors to the eID website can unleash their creativity and actively participate in the artistic development process alongside the musicians. This hybrid connection transforms the audience from spectators to active collaborators with eID. Don't take our word for it – try it out yourself and join the fun! The framework operates in-browser on the frontend and can be utilized for various AI-driven experiences, making it easily transferable to other projects.


QUANTIFIED TREES (QTREES) - INTELLIGENT IRRIGATION FORECASTING FOR URBAN TREES

Funded by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) based on a resolution of the German Bundestag.

Berlin today has 7,000 fewer street trees than it did two and a half years ago! Their lifespan is also declining rapidly.

In cooperation with CityLAB Berlin of the Technologiestiftung Berlin and the Straßen- und Grünflächenamt Mitte, we have received funding from the Federal Ministry for the Environment to develop machine learning models that predict the irrigation needs of individual trees. The results will be presented in an interactive monitoring tool that can serve as a planning basis for plant protection and green space offices. In addition, citizens will be enabled to provide targeted support.

Berlin street trees are increasingly suffering from the effects of climate change. Drought periods in summer and increased heavy rainfall, as well as the accumulation of temperature extremes, put a strain on the urban ecosystem. The municipal green space offices cannot guarantee sufficient irrigation of all street trees in summer, which increasingly endangers their existence.

In the meantime, citizens are regularly called upon to help. So far, however, this support has been largely uncoordinated and poorly informed.

In a flagship project focusing on Berlin-Mitte, we want to develop an artificial intelligence (AI)-supported forecasting system that identifies acutely endangered trees at an early stage and can thus lead to more efficient irrigation concepts. The models will be based on a wide range of data, provide decision support for administrations and help them prioritise irrigation. Based on the prediction model, we will develop a prototype of an online app that enables coordinated citizen participation in the irrigation of urban green spaces.
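
To give a sense of what such a forecasting model could look like, here is a minimal sketch of an irrigation-need regression in Python. The feature names, the target column and the CSV file are hypothetical placeholders; the actual QTrees models and data sources are more extensive.

```python
# Minimal sketch of an irrigation-need forecast, NOT the actual QTrees model.
# Feature names (rainfall_mm_14d, temp_max_7d, tree_age_years, soil_moisture)
# and the target (water_need_l_per_week) are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Assumed CSV of per-tree observations joined with weather and sensor data.
df = pd.read_csv("tree_observations.csv")
features = ["rainfall_mm_14d", "temp_max_7d", "tree_age_years", "soil_moisture"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["water_need_l_per_week"], test_size=0.2, random_state=42
)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X_train, y_train)
print("MAE [litres/week]:", mean_absolute_error(y_test, model.predict(X_test)))

# Rank trees by predicted need so street-level watering can be prioritised.
df["predicted_need"] = model.predict(df[features])
print(df.sort_values("predicted_need", ascending=False).head(10))
```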



KRACH.AI - SYNTHETIC SOUND MANIPULATION. A WHOLE NEW WORLD OF EXPLORING SOUND, VOICE AND LANGUAGE IN LATENT SPACE TO CREATE UNHEARD NOISE.

Just like our sense of smell or touch, hearing is closely linked to emotions. The world of sound extends from the universe that surrounds us, the street we live in and the languages we speak to our personal playlist and the songs we sing to our children.

Sound claims space, it is connected with power, and at the same time anyone can produce it. Great works of rhythm and music have been created by the poorest of societies. Sound has always had great democratizing potential.

With Krach.ai we use the many facets of sound to offer all kinds of people a step into latent space. We’re thrilled that it has already been used as a tool by several artists. They play the AI’s voice like an instrument: changing its speed and pitch, glitching its enunciation, altering intonation and emotional resonance, exploring rhythms, tones and narratives that have never been heard before. And we can’t wait for more first encounters with Krach in the endless space of sound manipulation!
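
As a small taste of the "instrument" idea, here is a signal-level sketch in Python that changes the speed and pitch of a synthetic voice clip. It uses librosa as a stand-in; Krach.ai itself works in the latent space of a speech model, so this only mimics the surface effects.

```python
# Signal-level stand-in for the speed/pitch play described above.
# librosa is an assumption here, not necessarily the project's actual stack.
import librosa
import soundfile as sf

# Load a (hypothetical) synthetic voice clip produced by the AI.
voice, sr = librosa.load("ai_voice.wav", sr=None)

# Play the voice "like an instrument": stretch time without changing pitch,
# then shift the pitch up a perfect fourth (5 semitones).
slowed = librosa.effects.time_stretch(voice, rate=0.75)
shifted = librosa.effects.pitch_shift(slowed, sr=sr, n_steps=5)

sf.write("ai_voice_played.wav", shifted, sr)
```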

Krach.ai arose from our collaboration with the electronic musicians Mouse on Mars and, among other things, gave rise to the album AAI, released on February 26th!

Portrait xo is an artist creating hybrid works between sound, technology and new media, using the AI for an "AI for self-reflective co-creation" experiment.

“Sir Neuron 1068” is a radio play by artist Anina Rubin: an artistic, poetic examination of a neural network with narrative passages, dialogues and several musical parts. Coming soon!

Synthetic Pulsar by Marcin Pietruszewski and Alex Freiheit probes the synthetic potential of the pulsar as an integrative object functioning across and within disciplines of astrophysics, technology of sound, and computational speech design.



AI(SKIN) - PROFESSOR COFFEE MACHINE

A performative and evolving AI artwork by Alexander Iskin and Birds on Mars.

As a teenager, Iskin found a doorway to art in the medieval town of Goslar. At the local art museum, he invited exhibiting artists Jonathan Meese and Herbert Volkmann to see some of the first paintings he had made in his small childhood room. They liked his style and invited him to move to Berlin and into a studio with Volkmann, who shared his interest in literature and art-historical references.

Having moved to Germany from Russia with his parents in 1992, Iskin is a child of Generation Y who has always loved the infinity and promise of digital worlds. After Herbert Volkmann’s death in 2014, he started creating new spaces he calls Interreality, referring to the multi-perceptivity and multi-perspectivity of our present reality: the interactions that are simultaneously taking place and constantly creating the collective fantasy we are living in. We connect his ideas with artificial intelligence.

’Embryo‘, the first performance of the series, was presented at the Mönchehaus Museum in Goslar. ‘Child’ is shown as part of the opening of the SEXAUER Showroom in Charlottenburg. The childlike intelligence of Professor Coffee Machine has been fed by Birds on Mars with hundreds of paintings and can now react in a childlike way to paintings by Alexander Iskin. She is a growing intelligence, moving (yes, she moves) through our interreality and allowing us to deal with feelings of affection, loss of control and unfair judgment.

’The professor is a projection screen for topics that have long become relevant through artificial intelligence, but are still far from being understood. Art can enable us to reflect on technology in many different ways.’ - Hartmut Wilke.



WATER_ON_MARS

Without water, there is no life on earth. Water is a habitat for a multitude of life forms, and all physiological processes take place in an aqueous environment. It covers about two-thirds of the earth's surface, mostly as salt water (97.4%). All biochemical reactions require water as a solvent. Beyond its vital functions, water has always been an aesthetic fascination and a point of reference for artistic creation.

For STATE Studio and in cooperation with photographer Gabriele Neugebauer, we trained an AI to learn the form, color and space of water by analyzing 200 of Neugebauer’s photos. The photo series shows the one-week learning process of a GAN, not only by sketching connections between the photographic training data and the new AI pictures but also by illustrating how the AI apparently develops the motifs further and creates new fictitious water worlds.

The technology transforms the human perception of water into a visual interpretation from an “artificial point of view". We showed results from different phases of the AI's learning process, together with a video loop of the machine trying to understand and recreate the visual essence of the photographs and the motifs they depict, as well as the training data.
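
For readers curious about the machinery behind the series, here is a compact DCGAN-style training sketch in PyTorch. The architecture, image size and hyperparameters are illustrative assumptions rather than the exact configuration used for the water photographs.

```python
# Compact DCGAN-style training sketch; all details here are illustrative
# assumptions, not the exact setup used for the water series.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
z_dim = 100

# The ~200 water photographs, assumed to live under photos/water/ as an ImageFolder.
data = datasets.ImageFolder(
    "photos",
    transform=transforms.Compose([
        transforms.Resize(32), transforms.CenterCrop(32),
        transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
    ]),
)
loader = DataLoader(data, batch_size=16, shuffle=True)

G = nn.Sequential(  # latent vector -> 32x32 RGB image
    nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
).to(device)

D = nn.Sequential(  # 32x32 RGB image -> realness logit
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4, 1, 0), nn.Flatten(),
).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
loss = nn.BCEWithLogitsLoss()

for epoch in range(200):
    for real, _ in loader:
        real = real.to(device)
        z = torch.randn(real.size(0), z_dim, 1, 1, device=device)
        fake = G(z)

        # Discriminator step: real photos vs. generated "water".
        d_real, d_fake = D(real), D(fake.detach())
        d_loss = loss(d_real, torch.ones_like(d_real)) + loss(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: try to fool the discriminator.
        g_out = D(fake)
        g_loss = loss(g_out, torch.ones_like(g_out))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
```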

BABEL BIRD - LINGUISTIC DIVERSITY

The Babel Fish is a fictional creature from the novel The Hitchhiker's Guide to the Galaxy by Douglas Adams. In the novel, the Babel Fish can be inserted into an ear, enabling an understanding of all spoken languages. Since we don't want to wait until the Babel Fish rushes over to us from a far-distant galaxy, we have developed the Babel Bird!

The Babel Bird is a tool that translates the spoken word into writing: fully automated, in real time and in high quality. That way, we can overcome linguistic hurdles thanks to artificial intelligence and work even more closely together. The Babel Bird enables linguistic flexibility and diversity in all directions!
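
A minimal speech-to-text loop in the spirit of the Babel Bird could look like the following Python sketch. The SpeechRecognition package and the Google Web Speech backend are stand-ins; the actual Babel Bird stack is not described here.

```python
# Minimal speech-to-text loop; SpeechRecognition and the Google backend are
# stand-ins, not the Babel Bird's actual components.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening... (Ctrl+C to stop)")
    while True:
        audio = recognizer.listen(source, phrase_time_limit=10)
        try:
            # The language code is only a hint; a multilingual setup
            # would detect the spoken language instead.
            text = recognizer.recognize_google(audio, language="de-DE")
            print(">>", text)
        except sr.UnknownValueError:
            print(">> [unintelligible]")
```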


256PIXELS - ENDLESS, INTERACTIVE GRAFFITI ART IN PUBLIC SPACE

Together with OneZeroMore (OZM), an urban art gallery from Hamburg-Hammerbrook, we want to get people even more enthusiastic about urban art and turn the world of trainwriting upside down! How? By using the potential of AI and creating the first digitally generated graffiti of its kind. The Hamburg railway, which graffiti artists often use as an unwanted canvas, is also the focus of our project. But this time we use the movement patterns of the trains as input for continuously generating new digital graffiti. No color, just data. That way, passengers become part of the installation simply by using public transport.

The projection runs at 1094x256 pixels on a huge shipping container located on the roof of OZM. The impressive LED screen will be built by the artist and YouTuber bitluni. Hundreds of digitized works by various OZM street artists serve as the database and training material for our inspiring AI. This creates an interplay of shapes and colors: new artistic ties are made and translated into collective color landscapes in real time. The installation can currently be marveled at in a fully functional simulation environment.
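
The core mechanic, live train movement steering the latent input of a graffiti generator, can be sketched roughly as follows. The data feed, the generator checkpoint and the mapping function are hypothetical placeholders, not the production pipeline.

```python
# Sketch of the core idea: train-movement data steers the latent input of a
# graffiti generator. The data feed, the generator checkpoint and the mapping
# are hypothetical placeholders, not the production pipeline.
import torch

FRAME_W, FRAME_H = 1094, 256   # resolution of the LED container wall
Z_DIM = 128

def movement_to_latent(speed_kmh: float, position_km: float, line_id: int) -> torch.Tensor:
    """Fold a train's current state into a deterministic latent vector."""
    g = torch.Generator().manual_seed(line_id)        # each line gets its own "style"
    base = torch.randn(Z_DIM, generator=g)
    drift = torch.sin(torch.arange(Z_DIM) * position_km * 0.01)
    return base + 0.5 * drift + 0.01 * speed_kmh

# Hypothetical generator trained on digitized OZM graffiti pieces.
generator = torch.jit.load("graffiti_generator.pt").eval()

with torch.no_grad():
    z = movement_to_latent(speed_kmh=62.0, position_km=14.3, line_id=3)
    frame = generator(z.unsqueeze(0))                  # assumed -> (1, 3, FRAME_H, FRAME_W)
    frame = ((frame.clamp(-1, 1) + 1) * 127.5).byte()  # map to 8-bit RGB for the LEDs
```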

To realize the project at its full size, OZM will launch a Kickstarter campaign at the end of 2021 to finance it. We are excited to see the first pixels rolling on the rails very soon.


AAI: ANARCHIC ARTIFICIAL INTELLIGENCE - MOUSE ON MARS + BIRDS ON MARS

Sound meets AI, Birds on Mars meets Mouse on Mars.

Because they say the weird-named get along well, and because we feel like a band ourselves, we plucked up the courage and contacted Mouse on Mars. A few weeks later, we were invited to be part of their project ‘AAI’, surrounded by a remarkable team: Louis Chude-Sokei, Rany Keddo, Derek Tingle, Dodo NKishi, Andi Toma and Jan St. Werner.

With 'AAI' we generate an anarchic AI language and a musical human response to it - a new album by Mouse on Mars!

Caribbean poet Edward Kamau Brathwaite argued that music is rooted in and structured by dialect, that the patterns of a people’s voice give rise to the distinct sounds, noise and rhythms of their music. With that in mind, our goal is to create a new voice, a composite one. It is generated via an algorithm that focuses on deep learning of different dialects, vernaculars and slang. 

One can easily imagine an AI speaking in a standard language. We create an AI that masters, blends and even invents new accents and slang. The sound of that new dialect can be understandable but also cryptic, onomatopoetic, at different speeds, changing in volume in the way human speech works instead of the robotic monotone we attribute to artificial voices.

For that voice, Mouse on Mars created music, because to invent a different language is, as Brathwaite argued, to invent a different music.

First single: The Latent Space - Mouse on Mars
Album ‘AAI’ coming February 26th, 2021!

The team: 

Louis Chude-Sokei: words, voice, AAI
Rany Keddo, Derek Tingle, Birds on Mars: programming, AAI
Dodo NKishi: drums, percussion, AAI
Andi Toma, Jan St. Werner: electronics, instruments, production, AAI



STADT_GAN_FLUSS - ALGORITHMIC URBAN VISIONS

Intelligent systems can also be used to test what the city of the future will look like: In this project, AI and machine learning are used for architecture and urban planning. On the basis of a data set consisting of around 10,000 city maps, a neural network is trained to create its own fictitious urban designs.

In the CityLAB exhibition, the first urban visions are already being visualized by a huge plotter, while the network is being retrained and an interactive application is being developed to make the abstract topic of AI tangible for the visitors.
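
A rough sketch of the sampling step, drawing latent vectors from a trained generator and saving fictitious city maps that could be handed to the plotter, might look like this. The checkpoint name and output shapes are assumptions; the exhibited setup is not documented here.

```python
# Sampling sketch: draw latent vectors and save generated "city maps" as images.
# The checkpoint name and the generator's output shape are assumptions.
import torch
from torchvision.utils import save_image

generator = torch.jit.load("stadt_gan_fluss_generator.pt").eval()  # hypothetical checkpoint

with torch.no_grad():
    z = torch.randn(16, 100, 1, 1)      # 16 fictitious urban designs at once
    maps = generator(z)                  # assumed -> (16, 1, H, W) map-like images
    save_image(maps, "urban_visions.png", nrow=4, normalize=True)
```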

Together with CityLAB we organize the workshop series "AI for Berlin" to put concrete ideas on urban AI into practice. From brainstorming and conception to prototyping, design and implementation, intelligent solutions for Berlin are developed in interdisciplinary teams. Whether you already have an idea, are a Python enthusiast or simply want to take the chance to experience bleeding-edge concepts for your city: the sessions are open to anyone and will be announced via CityLAB.





BIRDS ON EARTH - BIRD DETECTOR NETWORK

In this repository we provide PyTorch code and pretrained networks to

  • classify bird genera based on their calls

  • classify the urban sounds dataset

  • use a pretrained CNN and fine-tune it on any sound classification task.

Our implementation is based on the VGG-like audio classification model VGGish. We also converted the original pretrained VGGish network from TensorFlow to PyTorch. The pretrained network was trained on the AudioSet dataset.
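
As an illustration of the fine-tuning workflow, the following sketch freezes a pretrained VGG-style backbone and retrains only a new classification head. torchvision's vgg16 stands in here for the converted VGGish network, and the data loader is assumed; the repository's actual module names may differ.

```python
# Fine-tuning sketch: freeze a pretrained VGG-style backbone and retrain only a
# new classification head. torchvision's vgg16 is a stand-in for the VGGish port.
import torch
import torch.nn as nn
from torchvision.models import vgg16

NUM_GENERA = 50                                  # number of bird genera in your data

model = vgg16(pretrained=True)                   # stand-in for the converted VGGish network
for p in model.features.parameters():            # freeze the convolutional layers
    p.requires_grad = False
model.classifier[-1] = nn.Linear(4096, NUM_GENERA)   # new task-specific head

optimizer = torch.optim.Adam(model.classifier[-1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# `loader` is assumed to yield (3-channel spectrogram image, genus label) batches.
for spectrogram, label in loader:
    optimizer.zero_grad()
    loss = criterion(model(spectrogram), label)
    loss.backward()
    optimizer.step()
```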

Our long-term goal is to create a tool that is not only able to detect Birds on Earth but even to generate Birds on Mars based on whistling.


ARTIFICIAL MUSE - BUILDING AN INSPIRATIONAL AI

Just as the invention of the camera helped artists explore new perspectives, artificial intelligence used in this manner can ultimately introduce a new way of seeing the world.

Roman Lipski, artist collective YQP and Birds on Mars are furthering the discourse around artificially creative systems at a time when the interactivity between man and machine is becoming commonplace.

The Artificial Muse is unique in the world: an Inspirational AI. In contrast to what is called Creative AI, which aims to develop AI systems capable of creating "works of art" (photos, images, texts, scripts, music...), Inspirational AI helps a person become a new artist or (art) worker. The Muse pushes Lipski, challenges him, becomes part of him, in dialogue, in symbiosis, at the "in-betweens" and in every connection.

The idea is to develop a new tool, an "instrument" like a new generation brush seamlessly integrated into Lipski's studio environment, his daily work. The current prototype consists of different data sets and models as well as the option of using video input as material. All these components - similar to a modular system in music production - are interconnectable to form (feedback) loops and pipelines.

In the future, it will be a question of combining human and artificial intelligence in their respective strengths, researching and designing the connections and spaces between them. And as is often the case, art is a pioneer.

Lipski is one of the first people in the world to collaborate with an AI on a daily basis, creating together and exploring a new new, and he has done so for nearly three years. We have already learned a lot and are still learning new things every day. One lesson is that Lipski and the Muse, man and machine, are beginning to develop their very own language, word for word, brush stroke for brush stroke, and that even exponential times sometimes need time.

COLORFEEL DATA

ColorFeel Data is a research and art project in collaboration with the UdK Berlin and the Museum für Werte. To better understand the relationship between colours and emotions, we developed a web app with which test subjects can collect data on their colour-feeling perception.

Colours influence our moods. That is strongly related to our history and cultural imprinting. But what happens when we look at this interplay from a new perspective? When we no longer ask which feelings we associate with which colour and instead assign a colour to feelings? Not all of us find it easy to name our feelings through linguistic symbols. Through colours, we can access our feelings more directly. A project that opens up a wide range of possible uses through a colourful data set!


BIRD BOX - LEARNING CONSTRUCTION KIT

Did you ever wonder how to make your company AI-ready? With our prototypical Learning Construction Kit we want to provide organizations with an interactive and fun way of planning their own customized Data & AI workshop and/or course experience. Pick a category and topic, lay down the cards and sort them collaboratively to match your demands. All the basic information you need (time, description, learning goal, etc.) is shown on the cards. Sorting and adjusting the cards together with management, learning and domain experts turns this manual planning into better, tailor-made enablement. We are also working on a digital version of the Bird Box. Stay tuned!


AI FOR KIDS PART 2 - COLLABORATION WITH DIAKONIE ROSENHEIM

What's cooler than Artificial Intelligence? Right: it's kids! That's why we're happy to show once again the unexplored possibilities of AI in our project: "KI-Kiste".

In collaboration with the Diakonie Rosenheim, we are working on projects that bring together social purpose, care work, and the fantastic world of AI.

To nourish children's creativity and encourage them to express their imagination in words, we came up with the idea of the "KI-Kiste". We want children to get excited about the beautiful tradition of storytelling. How does it work? Children place their favourite toy into the box, and the AI recognizes it. They can then tell their own stories about the object and listen to the stories recorded by other children. The data stays secure, and a social tradition of storytelling takes hold among the kids.
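
The toy-recognition step can be illustrated with an off-the-shelf image classifier. The following sketch uses a pretrained ImageNet model as a stand-in; the actual KI-Kiste model, labels and hardware are not documented here.

```python
# Minimal sketch of the "toy recognition" step, using an off-the-shelf ImageNet
# classifier as a stand-in; the actual KI-Kiste model and label set may differ.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True).eval()

image = Image.open("toy_in_the_box.jpg").convert("RGB")   # photo taken inside the box
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    top5 = logits.softmax(dim=1).topk(5)

# The predicted class would then trigger a prompt like "Tell me a story about ...".
print(top5.indices, top5.values)
```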

What once started as our project "KIKI" is now proudly entering its next stage. We are honoured that the "KI-Kiste" won a prize in the CIVIC idea competition "Gemeinsam wird es KI".


KALEIDOFON - INCLUSIVE INTERFACE FOR ARTISTIC SOUND WORK

"Kaleidofon"s idea is to support people in transforming their own speech, singing or sounds into new music and sound works. In doing so, the AI aims to adapt especially to its users' individual preferences and abilities. It can therefore provide people with disabilities with individualized access to expressing themselves in music and arts and supports more diverse artistic expressions, possibilities of collaboration and discourses.

Kaleidofon can be used for live jamming as well as for creating music alone.

With this tool, we would like to refer to and help realise Article 30 of the UN Convention on the Rights of Persons with Disabilities, enabling equal participation and representation for people with disabilities in the cultural sector.

Kaleidofon is a collaboration with barner16, a Hamburg-based network of over 100 people with and without disabilities working in culture and the arts.

Our idea won the CIP "Ideenwettbewerb" (idea competition) of the Bundesministerium für Arbeit und Soziales (Federal Ministry of Labour and Social Affairs)!


AI FOR KIDS PART 1 - COLLABORATION WITH DIAKONIE ROSENHEIM

Children and Artificial Intelligence?

Together with Diakonie Rosenheim we started a project in which we use AI in an age-appropriate and supportive way to foster creative and social skills, and to make the topic easily accessible to the different parties involved in education.

First of all, we developed a toy: an ‘artistically intelligent photo box’. We tested it together with children, their educators and parents at Bildungshaus Bad Aibling and KiTZ Neuperlach.

KIKI is only the start of our AI for Kids initiative, and we are also searching for supporters! There are various ways to contribute to the project. If you are interested, please don’t hesitate to contact Dr. Hartmut Wilke for more information.

Let’s draw together from one of the most incredible Intelligences there is - child intelligence!



GANS N' ROSES - IMAGINE ALL THE FLOWERS

Nature shows us an indescribable variety of flowers, and yet people cultivate them to grow even more beautiful ones. GANs N' Roses is the continuation of gardening in digital space: with the help of generative AI, new types of flowers are created that seem strange at first, but are the beginning of another flower evolution. Artificial intelligence for the imagination of the new New, and not "just another" or a fake.

GANs N' Roses is a joint venture between the Switzerland-based Flamboyant AG and sol. Together we explore how artificial, flamboyant flowers can be produced with the help of generative adversarial networks (GANs), on the basis of a specially bred and curated training set of about 10,000 flowers, based mainly on flowers self-sown and self-cultivated over the years.



CANAIRY - EXPLORE AI TECHNOLOGY AND COORDINATE YOUR AIRFLOW IN TIMES OF COVID-19

"You don’t smell it, you don’t see it, but when the canary falls off the perch, there’s something wrong with your airflow!"

Just like the workers in the coal mines, who used canaries to warn them of deadly gases, you can use these animals in today's workspaces. No bird will be hurt, we promise!

Canairy is our new open-source application for coordinating your airing schedule and preventing viruses from spreading within your team. To solve the problem every company is currently facing, we use Apache Airflow, a workflow management platform that is on everyone's lips when it comes to the productive and sustainable development of AI. It is therefore also a framework and an easy entry point for your team to explore AI technology and devices.

Just install the software on a Raspberry Pi, set the times of your "airflow" according to your individual conditions and be reminded regularly to air your place! How? Let Canairy sing.
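
For those curious how such a recurring reminder maps onto Apache Airflow, here is a minimal DAG sketch. The DAG id, the schedule and the command that lets Canairy sing are placeholders, not the application's actual code.

```python
# Minimal Airflow DAG sketch for a recurring airing reminder. The DAG id,
# schedule and "sing" command are placeholders, not Canairy's actual code.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="canairy_airing_reminder",
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 9-17/2 * * 1-5",   # every 2 hours, 9-17h, on weekdays
    catchup=False,
) as dag:
    # On the Raspberry Pi this could play a birdsong sample through a speaker.
    remind = BashOperator(
        task_id="let_canairy_sing",
        bash_command="aplay /home/pi/sounds/canary.wav",
    )
```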

Join in, bring airflow into your environment and help us develop the next intelligent level with integrated CO2 sensors. We need your ideas!