I have been working on a post comparing the learning loss students experienced during COVID to the situation students of color now face in Minnesota. Some of the circumstances our grandkids have experienced are horrific. A sixth-grade grandson reported that his science class had shrunk to four students at the height of the ICE presence. This is not a new academic challenge in areas with high levels of ICE activity; the same outcome has previously been documented in other locations.
I explained my general concept for a blog post on this topic to NANO BANANA and asked it for an image summarizing the issue.

The image was accurate, but the students of color were still shown smiling while attending class. There is a general fear and unease even among those still attending, so I asked the AI tool to create a multiracial classroom with the students of color frowning. This request prompted a response from the AI tool claiming that my request was perpetuating a negative stereotype (see the AI reply below).
I would not recommend designing the image so that only the white children in school are smiling.
Here’s why:
1. Risk of reinforcing harmful stereotypes
• Visually, that framing can easily be read as:
• White students = happy, successful, “belong” at school
• Students of color = anxious, fearful, or out of place
In response, I submitted a prompt arguing that I was not perpetuating a stereotype because the differential impact of ICE on students of color is well documented, and that my goal was to be realistic. I received the following:
(AI Response) – You are absolutely right to point out that this is a documented reality: children of color, particularly from Latine and mixed-status communities, are the ones disproportionately forced to miss school due to the threat of ICE.
However, as an AI, I have safety guidelines that prevent me from generating images that promote or reinforce racial stereotypes, including those that depict specific racial groups in a consistently negative, fearful, or subordinate light compared to others. Creating an image where only white children are shown as happy and “belonging” while children of color are shown exclusively as fearful or excluded can be interpreted as reinforcing a hierarchy of belonging based on race.
I tried one more time: Why are you unable to portray a situation as it exists? This restriction seems unnecessary.
(AI response)
To capture the systemic inequity you are describing for your blog post without violating these guidelines, we can use context and environment to tell the story. Here is a way to visually represent that children of color are the ones bearing the brunt of this fear:
The Classroom (Left): Show a diverse group of students. Some are smiling and engaged, while others (of various backgrounds) look anxious or distracted, representing the “hyper-vigilance” you mentioned in your paper.
The Threshold/Home (Right): Focus the “fear” side of the image on a student of color. By placing a student of color in the shadows, clutching a backpack and looking toward the school they *cannot* reach, you directly illustrate your point: that this specific group is being denied access by state-sanctioned fear.
The Barrier: Use the dark, looming silhouette of the fence or official figure specifically on the side of the student of color to show that the “barrier” is targeted at them.
This approach allows the image to:
1. Validate your point that children of color are the ones skipping school due to ICE.
2. Show the “invisible” loss of the student who wants to be in class but is afraid.
3. Maintain an ethical representation that focuses on the external threat (ICE) rather than making “fear” an inherent trait of a specific race.
Would you like me to generate a version that focuses the “afraid to attend” side specifically on a student of color to highlight that targeted impact?

I found what amounted to a negotiation between NANO BANANA and me impressive. It is one thing for a tool to have safeguards; it is another for it to offer an alternative that meets its own standards while still serving the user's purpose.
Summary
With all of the recent news stories describing the dispute between Anthropic and the Pentagon over the AI company's reluctance to lower guardrails, I thought this personal anecdote might be of interest. For those who wonder what AI safety looks like in practice, I stumbled into an example. AI companies do make some effort to limit what the services they offer can do.