
Between precision and pixels

Updated: April 18

April 8, 2025

ChatGPT and Führding-Potschkat (2025). Between precision and pixels: Why AI-generated illustrations for biology are (still) not sufficient. Retrieved April 8, 2025, from https://www.openai.com/chat.

Article generated in dialogue with Dr. Petra Führding-Potschkat/AI WitchLab via OpenAI ChatGPT. AI-generated text, reviewed and edited by the author.

Image: Artificial General Intelligence Illustration. David S. Soriano


Why AI-generated illustrations for biology are not (yet) good enough

From an outside perspective, it sounds tempting: You enter a few keywords – "leaf cross-section, black and white, labeled" – and a few seconds later, an artificial intelligence delivers a finished image. But anyone working in a scientific context quickly realizes: There's a gap between "looks like" and "is technically correct."


A quick self-experiment with ChatGPT and DALL·E shows that the results are fine for a rough visual impression. But for robust technical use? Not by a long shot.


The core of the problem: AI has no biological understanding

Contrary to what many assume, an AI like DALL·E doesn't "understand" what it's drawing. The models do not recognize biological structures in any scientific sense. Instead, they combine purely statistical image patterns from the training material, regardless of whether those patterns are accurate, outdated, or simply wrong.


This leads to important details being misplaced or not recognized at all, for example:

  • a vascular bundle that suddenly encompasses the entire spongy mesophyll,

  • a stoma that is missing or rendered only vaguely,

  • or label lines that point to nothing in the image.

Labels: rarely useful


Precise, error-free labeling is essential, especially for scientific diagrams. However, AI-generated images:

  • place labels inaccurately or on the wrong structures,

  • mix languages (e.g., the German "Epidermis" instead of the English "epidermis"),

  • and often get the direction of arrows wrong.

This is useless for teaching materials or publications – precision counts here.


No feedback, no correction – only trial and error

A classic workflow with an illustrator allows for targeted feedback: "Please draw the vascular bundle smaller, correct the cell shape of the palisade tissue, and only label the xylem."

This doesn't work with AI image generators. Each "new attempt" is a new random combination, not a controlled refinement of the previous one.
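To make that concrete, here is a minimal sketch, assuming the OpenAI Python SDK with DALL·E 3 access; the prompt and parameters are illustrative and not part of the original experiment. Two calls with the same prompt are independent samples from the model; there is no handle for passing the first image back and requesting a targeted correction.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Leaf cross-section, black and white, labeled, textbook style"

# First attempt: one independent sample from the model.
first = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024", n=1)

# "New attempt": another independent sample, not a revision of the first.
# There is no parameter that says "keep everything, only shrink the vascular bundle".
second = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024", n=1)

print(first.data[0].url)
print(second.data[0].url)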


Scientific content isn't "mainstream" enough

Image AIs deliver their best results in domains that are heavily represented in their training data: portraits, landscapes, and everyday scenes. Biological cross-sectional drawings, on the other hand, are:

  • rarely included in training material,

  • usually too complex in structure,

  • and bound to strict formal conventions (e.g., histological representations).


Simple textbook standards are often not reproduced correctly, even with precise prompting.


Conclusion: Where AI images are helpful—and where they are not

Area of application | Suitability of AI-generated images
Rough visual sketches | ✅ Suitable
Stylish illustrations (e.g., posters) | ✅ Limited suitability
Biological technical drawings | ❌ Not reliable
Labeled textbook diagrams | ❌ Unusable without post-processing

Recommendation for professional use:

Those working in teaching, research, or publishing are currently better advised to use:

  • illustrators with a specialist scientific background,

  • vector programs such as Adobe Illustrator or Inkscape, or

  • ready-made textbook graphics with a license release (e.g., from Springer, Elsevier, or Wikimedia Commons).


AI can complement these approaches, but it cannot replace them where biological precision is required.

Discussion welcome

Do you use AI image generators in your work? What experiences have you had that were helpful or frustrating? I welcome feedback or additions in the comments.


