Author: Abdoulaye

  • The Real NoSQL? Chatting with MySQL in Plain English (and French!)


    You can now interact with a database in English—the real NoSQL 🙂

    As someone who has done a lot of database administration and written a lot of SQL queries in the past, this is the stuff of dreams!

    So this week, when Google announced Gemini CLI extensions, I was particularly interested in the new database extensions, which let you interact with MySQL and PostgreSQL using natural language. I also tried it in French, and it works, with the query feedback returned in French as well.

    Here’s a guide to get started if you want to try yourself:

    Prerequisites

    • You need to have a MySQL server installed and running.
    • You will need the wget and unzip command-line utilities.

    Step 1: Download the Sakila Sample Database

    First, you need to download and unpack the sample database files from MySQL. These commands will download the zip archive and extract the SQL files into a new directory called sakila-db.

    Bash

    # Download the official Sakila database archive
    wget https://downloads.mysql.com/docs/sakila-db.zip
    
    # Unzip the archive
    unzip sakila-db.zip
    

    After running these commands, you should have the sakila-schema.sql and sakila-data.sql files inside the sakila-db directory.

    Step 2: Install the Gemini CLI

    Next, install the Gemini Command Line Interface (CLI). Open your terminal and run the following command. This will download and run the installation script.

    Bash

    /bin/bash -c "$(curl -fsSL https://storage.googleapis.com/gemini-one-comp-a-us-central1-prod-new/installer.sh)"
    

    Follow the on-screen instructions to complete the installation.

    Step 3: Install the MySQL Extension

    The Gemini CLI uses extensions to connect to different tools and services. To connect to your database, you’ll need to install the MySQL extension.

    Bash

    gemini extension install mysql
    

    You can check the installed extensions and their available commands by running this command inside the Gemini CLI:

    /mcp list

    Step 4: Set up the Sakila Sample Database

    Now, we’ll load the Sakila database into your MySQL server. This database, which models a DVD rental store, will provide a good dataset for testing queries.

    1. Create the database:

       mysql -u YOUR_USERNAME -p -e "CREATE DATABASE sakila;"

       (Replace YOUR_USERNAME with your MySQL username. You will be prompted for your password.)

    2. Import the schema (the table structures):

       mysql -u YOUR_USERNAME -p sakila < sakila-db/sakila-schema.sql

    3. Import the data:

       mysql -u YOUR_USERNAME -p sakila < sakila-db/sakila-data.sql

    Step 5: Configure the Database Connection

    The Gemini CLI needs to know how to connect to your database. It uses environment variables for this. You’ll need to set these in your terminal.

    Important: For the extension to work, you must set these variables before you start the Gemini CLI.

    Bash

    export MYSQL_HOST="localhost"
    export MYSQL_PORT="3306"
    export MYSQL_DATABASE="sakila"
    export MYSQL_USER="YOUR_USERNAME"
    export MYSQL_PASSWORD="YOUR_PASSWORD"
    

    (Replace YOUR_USERNAME and YOUR_PASSWORD with your MySQL credentials).

    Step 6: Start Gemini CLI and Query in English

    Now you’re ready. Start the Gemini CLI:

    Bash

    gemini
    

    Once it’s running, you can start asking it questions about the database in plain English. The MySQL extension will translate your questions into SQL queries and show you the results.

    Example Prompts:

    Try asking some of these questions. You don’t need to know any SQL!

    • list all tables in the database
    • show me the top 10 most rented films
    • who are the top 5 customers by total payment amount?
    • find all films in the 'Action' category starring 'NICK WAHLBERG'
    • what is the average rental duration for films rated 'PG-13'?

    You can now explore the database just by having a conversation.

    Even in French!

  • How I Built an Agriculture “Expert” with a 549MB Model

    Fun Sunday exercise: how much useful information can I squeeze into a tiny AI model? My goal was to take a general-purpose language model and turn it into an expert on a vital topic such as agriculture.

    Here are my steps:  

      1. The Foundation: Google’s Gemma 270M

     I started with the small but powerful gemma-3-270m-it base model. At just 550 MB, it's a brilliant piece of engineering that can run on consumer-grade hardware; I'm using my laptop.

    https://developers.googleblog.com/en/introducing-gemma-3-270m

      2. The Technique: Parameter-Efficient Fine-Tuning (PEFT) with LoRA

    Instead of retraining the entire model (which is slow and resource-intensive), I used a technique called LoRA. Think of it like adding a small, highly specialized “expert module” to the model’s existing brain. The original model’s knowledge remains, but we efficiently “teach” it a new skill, in this case agricultural information. 
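
    If you're curious what that looks like in practice, here is a minimal sketch using the Hugging Face peft library. The rank, alpha, and target modules below are illustrative choices, not the exact settings from my run:

    Python

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "google/gemma-3-270m-it"
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Illustrative LoRA settings, not the exact values from this experiment.
    lora_config = LoraConfig(
        r=16,                                  # rank of the low-rank update matrices
        lora_alpha=32,                         # scaling factor for the update
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only a tiny fraction of weights is trainable

    The base weights stay frozen; only the small adapter matrices are trained, which is why the resulting "expert module" ends up so small.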

      3. The Curriculum: The Agriculture Q&A Dataset

     I used the KisanVaani/agriculture-qa dataset to teach the model the nuances of farming, crops, pests, and soil.

    https://huggingface.co/datasets/KisanVaani/agriculture-qa-english-only/tree/main
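
    Loading and formatting the data takes only a few lines with the datasets library. A sketch (the column names here are my assumptions, so check the dataset card):

    Python

    from datasets import load_dataset

    ds = load_dataset("KisanVaani/agriculture-qa-english-only", split="train")

    def to_prompt(example):
        # Turn each Q&A pair into a single training text.
        # 'question' and 'answers' are assumed column names.
        return {"text": f"Question: {example['question']}\nAnswer: {example['answers']}"}

    ds = ds.map(to_prompt)
    print(ds[0]["text"])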

      4. The Result

     After a 15-minute training session, the new “expert module” I created was only 45 MB! That's right: for just 45 MB, I layered deep agricultural knowledge onto a powerful base model. The result is a specialized AI assistant that is more accurate and relevant for agricultural queries than the original.
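
    Conceptually, using the result looks like this: save only the adapter, then layer it back onto the frozen base model at inference time. A rough sketch (the "agri-lora" path is hypothetical):

    Python

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # After training, persist only the adapter weights (~45 MB), e.g.:
    # trainer.model.save_pretrained("agri-lora")

    base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m-it")
    tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-270m-it")

    # Layer the agricultural "expert module" on top of the frozen base model.
    model = PeftModel.from_pretrained(base, "agri-lora")

    prompt = "Question: which maize disease survives well in warm and humid weather?\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))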

    Model output:

    — Loading Model and Tokenizer —

    Model and tokenizer loaded successfully.

    Dataset loaded successfully from ‘/home/abdoulaye/aiplayground/agriculture_qa_dataset’.

    Device set to use cuda:0

    — Testing Base Model Performance —

    — Test Question 1 —

    Question:

    which maize disease survive well in warm and humid weather.

    Original Answer:

    Gray leaf spot

    Generated Answer (Base Model):

    The maize disease, also known as the maize blight, is a fungal disease that can affect maize plants, particularly in areas with high humidity and high temperatures. It’s a common problem in many parts of the world, and it can be difficult to control.

    ————————————–

    — Test Question 2 —

    Question:

    how can overuse of pesticides destroy soil nutrients?

    Original Answer:

    Pesticides can kill beneficial soil microorganisms and reduce soil biodiversity, which can lead to nutrient depletion and reduced soil fertility.

    Generated Answer (Base Model):

    Overuse of pesticides can be a serious threat to soil nutrients, which are essential for plant growth, soil health, and overall ecosystem function. Here are some ways pesticides can negatively impact soil nutrients:

    *   **Reduced nutrient availability:** Pesticides can disrupt the natural nutrient cycle, leading to nutrient deficiencies and reduced plant growth.

    *   **Soil degradation:** Pesticides can cause soil erosion, compaction, and altered soil structure, weakening the soil’s ability to retain nutrients.

    *   **Reduced plant health:** Pesticides can suppress plant growth, leading to stunted development, reduced yields, and increased susceptibility to disease.

    *   **Soil contamination:** Pesticides can contaminate soil with harmful chemicals, which can harm soil microorganisms, leading to soil degradation and reduced nutrient availability.

    *   **Impact on plant physiology:** Pesticides can affect plant physiology, including nutrient uptake, metabolism, and stress tolerance.

    *   **Altered soil pH:** Pesticides can alter soil pH, which can affect the availability of essential nutrients.

    This quick experiment shows small AI models can provide practical solutions. By using efficient models like Gemma and smart techniques like LoRA, we can build tools that understand various local contexts.

    The power to build specialized AI is here, and I’m excited to see what people will build in my region.

    For those interested in the technical details, I used the Hugging Face Transformers library to handle the model and the PEFT library’s implementation of LoRA for efficient training. You can learn more about them at the links below:

    * For the Hugging Face `transformers` library, the central hub for everything related to the library: https://huggingface.co/docs/transformers

    * For LoRA and PEFT (Parameter-Efficient Fine-Tuning), the documentation for the `peft` library, which is what I used to implement LoRA: https://huggingface.co/docs/peft/conceptual_guides/lora

      

    #AI #LLM #FineTuning #Gemma #PEFT #LoRA #DemocratizeAI #AIforGood #TechInAfrica #GhanaTech #NLP #MachineLearning

  • What Happens When You Overlay Fire Data on an AI’s Map of Earth?

    I started this exercise wondering how Google DeepMind’s AlphaEarth embeddings dataset would compare to historical fire data. It turned out to be a really interesting experience.
    I learned there are a lot of fires in central Africa—especially Southern DRC, Angola, and Zambia—far more than I expected. Apparently, some fires are set on purpose as part of an agricultural practice.
    Still, it’s worrying to see so many fires, especially when you think about the ones recently in the news across North America and Southern Europe. Funny enough, I hadn’t even noticed the pattern until my 9-year-old son pointed out the region to me while we were checking the map.
    I find the AlphaEarth dataset fascinating, as you can see a clear correlation between the type of landscape and where fires happen.
    I used Gemini to generate the boilerplate JavaScript for Google Earth Engine, and I had this visualization running in a few minutes; Gemini is quite good at generating Earth Engine code.

    Video caption: The colorful background is an AI’s unique “fingerprint” of our planet, where landscapes with similar characteristics get similar colors. Overlaid on top, the yellow/orange/red dots are real fire hotspots detected by NASA satellites over the last year with a confidence level set at 80%.
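
    My actual script is linked below under "Code I Used". For a rough idea of what it does, here is an equivalent sketch using the Earth Engine Python API. The dataset IDs come from the Earth Engine catalog; the band choices, styling, and geemap usage are my assumptions:

    Python

    import ee
    import geemap

    ee.Authenticate()  # one-time browser authentication
    ee.Initialize()

    # Background: the AlphaEarth satellite embedding dataset for 2024,
    # with three of the 64 embedding bands rendered as RGB.
    embeddings = (ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")
                  .filterDate("2024-01-01", "2025-01-01")
                  .mosaic())

    # Overlay: NASA FIRMS fire detections from the last year,
    # keeping only pixels with confidence >= 80.
    fires = (ee.ImageCollection("FIRMS")
             .filterDate("2024-01-01", "2025-01-01")
             .select("confidence")
             .max())
    fires = fires.updateMask(fires.gte(80))

    m = geemap.Map(center=[-5, 25], zoom=4)  # roughly central Africa
    m.addLayer(embeddings,
               {"bands": ["A01", "A16", "A09"], "min": -0.3, "max": 0.3},
               "AlphaEarth embeddings")
    m.addLayer(fires,
               {"min": 80, "max": 100, "palette": ["yellow", "orange", "red"]},
               "Fire hotspots")
    m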

    Dataset Credits:

    Code I Used:

    Link: https://code.earthengine.google.com/de7448ec0c632156d18902053bbda78f


  • AlphaEarth Foundations: Where AI Meets Earth Observation for Unmatched Detail

    🤯 This incredible image showcases the stunning beauty and diversity of the African continent, generated using the new AlphaEarth Foundations dataset on Google Earth Engine. So what is this dataset all about? 

    Imagine being able to X-ray the entire Earth across multiple years, even seeing through clouds! Dealing with clouds in remote sensing is a huge challenge (something I know well from my Open Buildings research project). The AlphaEarth team has essentially created a “virtual satellite” capable of doing just that. To achieve this, they combined vast amounts of data from dozens of public sources, including optical satellite images, radar, and 3D laser mapping, weaving it all into a seamless picture.

    Even after just a few minutes of exploring the dataset, I’ve stumbled upon fascinating insights. For example, why have Central Mali or Lake Kalala in Zambia changed so much? There’s likely a clear explanation, though I don’t know it yet.
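
    One simple way to surface such changes, sketched below in the Earth Engine Python API: the dataset's 64-dimensional embedding vectors are (per the dataset documentation) unit length, so a per-pixel dot product between two years acts as a cosine similarity, and low values flag change, like the white spots in my screenshot below:

    Python

    import ee

    ee.Initialize()

    col = ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")
    e23 = col.filterDate("2023-01-01", "2024-01-01").mosaic()
    e24 = col.filterDate("2024-01-01", "2025-01-01").mosaic()

    # Per-pixel dot product across the 64 embedding bands; values near 1 mean
    # the landscape "fingerprint" barely changed between the two years.
    similarity = e23.multiply(e24).reduce(ee.Reducer.sum())
    change = ee.Image(1).subtract(similarity)  # bright where change happened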

    This open dataset release is a huge step forward, likely to help scientists and experts make more informed decisions on critical global issues like food security, deforestation, urban expansion, and water resources.

    If you think you can leverage this dataset for your research on our changing world, consider applying for the Satellite Embedding Grant. (Link below)

    Paper: https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaearth-foundations-helps-map-our-planet-in-unprecedented-detail/alphaearth-foundations.pdf

    Google Deepmind Blog: https://deepmind.google/discover/blog/alphaearth-foundations-helps-map-our-planet-in-unprecedented-detail/

    Google Earth blog: https://medium.com/google-earth/ai-powered-pixels-introducing-googles-satellite-embedding-dataset-31744c1f4650

    Demo: https://code.earthengine.google.com/?scriptPath=Examples%3ADatasets%2FGOOGLE%2FGOOGLE_SATELLITE_EMBEDDING_V1_ANNUAL

    Dataset: https://developers.google.com/earth-engine/datasets/catalog/GOOGLE_SATELLITE_EMBEDDING_V1_ANNUAL

    Grant application: https://docs.google.com/forms/d/e/1FAIpQLSfxnmqM2PEKdphTWXh44jsy83SRBkn0grjg6shRS-mLJTsKrQ/viewform

    Google Earth Engine screenshot showing world embeddings of 2024.
    Screenshot of Google Earth Engine showing similarities between years (white spots are where most changes happened).
    Changes in central Mali
    Changes at Lake Kalala, Zambia
  • First Images from ESA Biomass Satellite

    Absolutely stunning images of Gabon and Chad from the European Space Agency’s (ESA) Biomass satellite.

    The first image shows the Ivindo River in Gabon, stretching from the border with the Republic of the Congo all the way to Makokou in the Ogooué-Ivindo province. This region is known for its dense forests. Typically, when we look at forests from above, all we see are the treetops. However, Biomass uses a special kind of radar, called P-band radar, which can penetrate the forest canopy to reveal the terrain below. This means it can measure all the woody material (trunks, branches, and stems), offering a much more complete picture than ever before.

    The second image features the Tibesti Mountains in northern Chad, and it looks like something straight out of space. Here, the radar demonstrates its ability to see up to five meters beneath dry sand. This opens up fascinating possibilities for mapping and studying hidden features in deserts, such as ancient riverbeds and lakes that have long been buried. Such insights are incredibly valuable for understanding Earth’s past climates and even for locating vital water sources in arid regions.

    It’s an exciting time as our ability to collect information about Earth continues to advance, especially with progress in remote sensing and Artificial Intelligence (AI). The rise of geospatial AI, in particular, is opening up fascinating new avenues for understanding our planet and opening new fields of research.

    If you’re a student, consider a career in understanding Earth through technology and AI; in my opinion, this field presents some interesting opportunities. You can explore more about the amazing Biomass mission on the official ESA website:

    https://www.esa.int/Applications/Observing_the_Earth/FutureEO/Biomass/Biomass_satellite_returns_striking_first_images_of_forests_and_more

    Image credit: ESA

  • AlphaGenome API

    🧬

    For those in genomic research: Google DeepMind has released AlphaGenome, an AI model for predicting how DNA sequences and variants affect gene regulation. The API is free for non-commercial research use.

    Feel free to share this with anyone in the field who might be interested.

    You can get the API key here: https://deepmind.google.com/science/alphagenome/account/terms

    Blog: https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/

    Colab examples: 

    Quickstart: https://colab.research.google.com/github/google-deepmind/alphagenome/blob/main/colabs/quick_start.ipynb#scrollTo=81ffd5da

    Visualizing predictions: https://colab.research.google.com/github/google-deepmind/alphagenome/blob/main/colabs/visualization_modality_tour.ipynb#scrollTo=ou8Yju8s-I0R

    #AI #Genomics

  • My Notes on Exploring Google’s Health Foundation Models

    (Note: This post reflects my personal opinions and may not reflect those of my employer)

    Example of the HeAR encoder that generates a machine learning representation (known as “embeddings”)

    This image is a spectrogram representing my name, “Abdoulaye,” generated from my voice audio by HeAR (Health Acoustic Representations). HeAR is one of the recently released Health AI foundation models by Google. I’ve been captivated by these foundation models lately, spending time digging into them, playing with the demos and notebooks, reading ML papers about the models, and also learning more about embeddings in general and their usefulness in low-resource environments. All of this started after playing with a couple of the notebooks.

    Embeddings are numerical representations of data. AI models learn to create these compact summaries (vectors) from various inputs like images, sounds, or text, capturing essential features. These information-rich numerical representations are useful because they can serve as a foundation for developing new, specialized AI models, potentially reducing the amount of task-specific data and development time required. This efficiency is especially crucial in settings where large, labeled medical datasets may be scarce.
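
    To make that concrete, here is a toy example with made-up vectors: similar inputs map to nearby points, and cosine similarity measures how close they are. Real models emit hundreds of dimensions; these four-dimensional vectors are purely illustrative:

    Python

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Made-up 4-D "embeddings" for illustration only.
    cough_a = np.array([0.9, 0.1, 0.3, 0.0])
    cough_b = np.array([0.8, 0.2, 0.4, 0.1])
    speech  = np.array([0.1, 0.9, 0.0, 0.7])

    print(cosine_similarity(cough_a, cough_b))  # high: acoustically similar
    print(cosine_similarity(cough_a, speech))   # lower: a different kind of sound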

    • If you would like to read further into what embeddings are, Vicki Boykis’ essay is a great free resource, and also an ideal entry point into machine learning more broadly. I know many of my previous colleagues from the telco and engineering world will love this: https://vickiboykis.com/what_are_embeddings/
    • For a technical perspective on their evolution, check out the word2vec paper: https://arxiv.org/abs/1301.3781

    The HeAR model, which processed my voice audio, is trained on over 300 million audio clips (e.g., coughs, breathing, speech). Its application can extend to identifying acoustic biomarkers for conditions like TB or COVID-19. It utilizes a Vision Transformer (ViT) to analyze spectrograms. Below, you can see an example of sneezing being detected within an audio file, and later, throat clearing detected at the end.

    Health event detector demo


    This release also includes other open-weight foundation models, each designed to generate high-quality embeddings:

    Derm Foundation (Skin Images) This model processes dermatology images to produce embeddings, aiming to make AI development for skin image analysis more efficient by reducing data and compute needs. It facilitates the development of tools for various tasks, such as classifying clinical conditions or assessing image quality.

    Explore the Derm Foundation model site for more information, and use this link to download the model.

    CXR Foundation (Chest X-rays) The CXR Foundation model produces embeddings from chest X-ray images, which can then be used to train models for various chest X-ray related tasks. The models were trained on very large X-ray datasets. What got my attention: some models in the collection, like ELIXR-C, use an approach inspired by CLIP (contrastive language-image pre-training) to link images with text descriptions, enabling powerful zero-shot classification. This means the model can classify an X-ray for a condition it wasn’t specifically trained on, simply by understanding a text description of that condition, which I find fascinating. The embeddings can also be used to train models that detect diseases like tuberculosis without a large amount of data; for instance, “models trained on the embeddings derived from just 45 tuberculosis-positive images were able to achieve diagnostic performance non-inferior to radiologists.” This data efficiency is particularly valuable in regions with limited access to large, labeled datasets. Read the paper for more details.
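
    To illustrate the data-efficiency idea (this is a generic sketch, not the study's actual setup), here is what training a small classifier on precomputed embeddings might look like; the files and shapes are hypothetical:

    Python

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical precomputed embeddings: one vector per X-ray. In practice
    # you would generate these with the foundation model.
    X = np.load("cxr_embeddings.npy")   # shape (n_images, embedding_dim)
    y = np.load("tb_labels.npy")        # 1 = TB-positive, 0 = negative

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # A simple linear probe is often enough on strong embeddings,
    # which is why so few labeled images can go so far.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))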

    Retrieve images by text queries

    Retrieve images by text queries demo

    Path Foundation (Pathology Slides) Google’s Path Foundation model is trained on large-scale digital pathology datasets to produce embeddings from these complex microscopy images. Its primary purpose is to enable more efficient development of AI tools for pathology image analysis. This approach supports tasks like identifying tumor tissue or searching for similar image regions, using significantly less data and compute. See the impressive Path Foundation demos on HuggingFace.

    Path foundation demos

    Outlier Tissue Detector Demo

    These models are provided as Open Weight with the goal of enabling developers and researchers to download and adapt them, fostering the creation of localized AI tools. In my opinion, this is particularly exciting for regions like Africa, where such tools could help address unique health challenges and bridge gaps in access to specialist diagnostic capabilities.

    For full acknowledgment of contributions from various institutions, including partners like the Center for Infectious Disease Research in Zambia, please refer to the details in the paper.

    For those interested in the architectural and training methodologies, here are some of the pivotal papers and concepts relevant to these foundation models:

    #AIforHealth #FoundationModels #GlobalHealth #AIinAfrica #ResponsibleAI #MedTech #Innovation #GoogleResearch #Embeddings #MachineLearning #DeepLearning

  • Visualizing equations and functions using Gemini and Three.js (vibe coded)

    Visualizing Machine Learning: An Interactive 3D Guide to Gradient Descent & SVMs

    From Gaussian Curves to the Heat Equation

  • TxGemma Release: AI Models for Therapeutics Development 🧪🔬

    Google DeepMind has released TxGemma, a set of open-weight AI models designed for therapeutic development. These models, based on the Gemma architecture, are trained to analyze and predict characteristics of therapeutic entities during drug discovery. 💊

    The release includes ‘chat’ variants (9B and 27B) that can engage in dialogue and provide explanations for their predictions. Additionally, Agentic-Tx demonstrates the integration of TxGemma into an agentic system for multi-step research questions. 🤖
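
    As a quick-start sketch with the Transformers library (the model ID is my assumption based on the Hugging Face release naming; you need to accept the license and authenticate with an HF token first):

    Python

    from transformers import pipeline

    # "google/txgemma-9b-chat" is an assumed model ID from the release naming;
    # check the TxGemma collection on Hugging Face for the exact identifiers.
    generator = pipeline("text-generation", model="google/txgemma-9b-chat")

    prompt = "Why might a small molecule fail to cross the blood-brain barrier?"
    print(generator(prompt, max_new_tokens=200)[0]["generated_text"])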

    A fine-tuning notebook is available for custom task adaptation:

    Execution is possible on a free T4 GPU after license acceptance and Hugging Face token provision:

    If you encounter issues with the provided fine-tuning notebook, you can check my pre-configured Colab notebook:

    Further resources:

    Credit for this release: Shekoofeh Azizi and other contributors. 🎉

  • Gemma 3: Massive Context, 35+ Languages, and Multimodal Capabilities

    🚨 Gemma 3 is out! It’s a family of open AI models (1B-27B parameters) featuring a 128k token context window (can work with very long documents and conversations), multilingual support (35+ languages, trained on 140+), and single GPU/TPU compatibility. I’m excited about its potential to increase accessibility to advanced AI models, especially in resource-constrained settings, and the multimodal capabilities that can enable diverse applications.
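
    A minimal way to try it, assuming you have accepted the gated Gemma license on Hugging Face and are on a recent version of the Transformers library:

    Python

    from transformers import pipeline

    # "google/gemma-3-1b-it" is the smallest instruction-tuned variant;
    # larger siblings (4B, 12B, 27B) follow the same naming pattern.
    chat = pipeline("text-generation", model="google/gemma-3-1b-it")

    messages = [{"role": "user",
                 "content": "In one sentence, why does a 128k-token context window matter?"}]
    print(chat(messages, max_new_tokens=60)[0]["generated_text"])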

    Blog: https://blog.google/technology/developers/gemma-3/

    Technical report: https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf

    Developer guide: https://developers.googleblog.com/en/introducing-gemma3/