
How to Make Trileqe: An Accessible Guide for Home Cooks

Technical Description by Elena Pjetergjokaj

LINK TO DOCUMENT

https://ccnymailcuny-my.sharepoint.com/:w:/g/personal/epjeter000_citymail_cuny_edu/EQvEvyi1QiBHgyRIjrViahABl5HLaGQi9MuMoACWkWnZOw?e=8yueHV

Rhetorical Situation:
Dear Classmates and Instructor,
For this technical description, I chose to describe the process of making a traditional Albanian dessert called Trileqe. The purpose is to break down the process of making this sweet treat in a way that is easy to follow, while also explaining the ingredients, tools, and science behind it.

The context of this description is an educational setting, where clear, step-by-step instructions are essential for helping readers understand a process they may be unfamiliar with. I imagine my work being published in a cookbook or a blog for an international audience seeking simplified traditional recipes. The intended audience includes individuals with little to no experience with Albanian cuisine—home cooks or culinary enthusiasts interested in exploring international desserts. My goal is to make the recipe approachable for anyone who wants to try making this unique dessert and help guide them through each step so they can achieve the perfect result. Through this, I hope to share the joy of Albanian culture and cuisine in a way that is accessible and enjoyable for all.

Introduction:
Trileqe is a popular Albanian dessert that is often mistaken for a Turkish one because of its widespread popularity across the region. The recipe is based on the famous Mexican tres leches cake, which features a very light sponge soaked in three kinds of milk: evaporated milk, condensed milk, and double cream. This version is topped with whipped cream and dark caramel (Safira, 2020).

If we take a deeper look into the history of this dessert, its origin can be traced back to Latin America, where it gained popularity around the 20th century. It likely reached the Balkans through cultural exchanges, trade, and immigration, eventually becoming a staple in many Albanian households. The Albanian version keeps the same core concept of soaking the cake in a three-milk mixture. The main difference is that our cake is denser and topped with a liquid caramel glaze. The caramel adds a unique flavor, which makes it a favorite in Albanian kitchens.

Overview:

Trileqe is typically rectangular, usually baked in a 9×13-inch pan. The cake itself is pale golden, while the milk mixture creates a moist, white interior when sliced. The caramel topping varies in color from light amber to deep brown, depending on how long it’s cooked.

Ingredients:

Sponge Cake:
– 6 large eggs
– 7 tablespoons granulated sugar (4 for egg whites, 3 for yolks)
– 6 tablespoons flour
– 1 teaspoon baking powder
– 1 teaspoon vanilla extract

Milk Mixture:
– 1 cup whole milk
– 1 cup sweetened condensed milk
– 1 cup evaporated milk

Caramel Topping:
– 1 cup granulated sugar
– 2 tablespoons butter
– 1/2 cup heavy cream


Steps & Explanations:

This dessert has three main components:

  1. Sponge Cake – A light, airy cake made with eggs, sugar, and flour.
  2. Milk Mixture – A blend of three different milks, giving the dessert its signature texture.
  3. Caramel Topping – A thick, sweet glaze that adds depth to the dish.

Components, Explanations, and Visuals

1. Sponge Cake

The cake is made with eggs, sugar, flour, baking powder, and vanilla extract.

  • Start with six large eggs. Separate the egg whites from the yolks into two mixing bowls.
  • Using an electric mixer, beat the egg whites until they’re fluffy, gradually adding four tablespoons of sugar.
  • You’ll know the egg whites are ready when you can flip the bowl upside down and the mixture doesn’t fall out.

Once the egg whites are ready, set them aside.

  • Add three tablespoons of sugar to the egg yolks and mix until fluffy.
  • After that, combine the yolks and whites. Use a silicone spatula instead of an electric mixer to fold them together gently—this keeps the mixture airy and gives the cake its signature spongy texture.

Now it’s time to add the flour, vanilla, and baking powder:

  • In a separate bowl, mix six tablespoons of flour and one teaspoon of baking powder, then sift them to avoid clumps.
  • Add one teaspoon of vanilla extract to the egg mixture and gently fold it in.
  • Slowly incorporate the sifted flour mixture, adding a little at a time with one hand while folding with the other to avoid lumps.

Next, prepare your baking pan:

  • Spray it with non-stick spray or line it with parchment paper to ensure the cake doesn’t stick.
  • Pour in the batter and let it rest while you preheat the oven.

Set the oven to 350°F (180°C) and bake the cake for 20–30 minutes. Since ovens vary, use a fork to test doneness—if it comes out clean, the cake is ready.


2. Milk Mixture

Now that the cake is done, it’s time for the milk soak. It’s easier than you might think! You’ll need:

  • 1 cup of whole milk – provides fat and creaminess.
  • 1 cup of sweetened condensed milk – adds richness and sweetness without making the cake soggy.
  • 1 cup of evaporated milk – helps balance the consistency and keeps the cake from being too heavy.

Mix all three together until fully combined.

Before pouring the milk over the cake:

  • Use a fork to poke holes throughout the cake so it absorbs the milk evenly.
  • Slowly pour the milk mixture over the cake and let it absorb it all.

3. Caramel Topping

You have two options here:

To make it from scratch:


– Melt the sugar in a saucepan over medium heat until golden brown.
– Carefully add the butter and heavy cream, stirring until smooth. Be cautious—the mixture will bubble.
– Let the caramel cool slightly, then pour evenly over the cake.
– Use 1/2 to 3/4 cup caramel, depending on your sweetness preference.

Or, save time like I do and buy ready-made caramel (often labeled krem caramel) from a Balkan/Albanian grocery store.

Pour the caramel over the soaked cake and let it set slightly before serving.


Final Steps:

Pour the caramel 1–2 hours before serving, then refrigerate to allow it to set. It can be added earlier but should be chilled before slicing.

For the feathering design, drizzle a few lines of heavy cream over the caramel and drag a toothpick through them to create a marbled effect.

Cut the cake into equal squares.

Refrigerate it for 3–4 hours or overnight for best results. This gives the milk time to absorb fully and enhances the flavor.


Conclusion:
Trileqe is a dessert that strikes the perfect balance between simplicity and indulgence. The light, airy sponge cake paired with a rich milk soak and caramel glaze creates a delightful mix of textures and flavors. The moist cake and creamy milk soak make each bite sweet and rich, while the caramel adds elegance and depth.

For those making Trileqe for the first time, remember to let the cake chill overnight to allow full absorption of the milk. A common mistake is overbaking the sponge, which can result in a dry cake even with the milk. Additionally, experimenting with caramel textures lets you customize sweetness to your liking.

This recipe beautifully fuses Latin American and Balkan traditions into one delicious dessert. Whether you’re an experienced baker or a beginner, Trileqe offers a fun challenge that rewards patience and precision. The final result is a dessert worthy of any occasion.

References:

Safira. (2020, October 27). Trilece. Tiffin and Tea. https://tiffinandteaofficial.com/trilece/

Who Gets to Be What?

A Gender, Race, and Age Analysis of AI-Generated Images

Elena Pjetergjokaj, Sadia Zabin, Samir Mazumder

City College of New York

21007 Writing for Engineering

Professor India Choquette

May 14, 2025

LINK TO DOCUMENT

https://ccnymailcuny-my.sharepoint.com/:w:/g/personal/epjeter000_citymail_cuny_edu/EXWj-kcqIqpIp6Xvs5SSUsEBgdKh-Akly4V5le5uvNk8NQ?e=Gn6rEK

Abstract

This lab report investigates the presence of age, race, and gender bias in AI-generated images, focusing on three distinct professions: pharmacist, babysitter, and mechanical engineer. Research has shown that AI image generators do not produce neutral representations but instead overrepresent certain groups while actively underrepresenting others. These patterns may reflect underlying societal stereotypes that are embedded in the data used to train these models.
Using Rabbithole, we generated 100 images of each profession and categorized them by age, race, and gender. Our goal was not to measure accuracy against real-world demographic data, but to identify recurring patterns and visual stereotypes that suggest built-in bias in how AI “imagines” different roles. Our findings showed that these systems frequently reinforce narrow, often outdated portrayals of people in professional settings, supporting the idea that image generators are shaped by the biases in their training data.

Introduction:

Artificial intelligence image generators, like AI itself, are becoming increasingly powerful tools capable of generating realistic visuals from simple prompts like “pharmacist,” “babysitter,” or “mechanical engineer.” However, the more widely these tools are adopted, the more pressing it becomes to address the biases present in their outputs. Bias in AI-generated images can manifest in several ways, including underrepresentation of certain groups or stereotypical portrayals based on race, gender, or age. These outcomes are rooted in the training data itself—data that often lacks diversity or reflects existing social inequalities. As a result, AI systems may unintentionally reinforce stereotypes even when marketed as neutral or objective.

Yiran Yang (2025), writing for AI & Society, explores racial bias in AI-generated images and provides compelling visual examples of how image generation systems often reinforce narrow portrayals of cultural and racial identity. Yang critiques how AI tools disproportionately favor White or East Asian features, marginalizing others. Her perspective is especially relevant to our lab, as she investigates how AI reflects not only technical limitations, but also the cultural frameworks embedded in the data. Like Yang, we analyze AI-generated portraits for patterns of bias in how professions are visualized and explore how the outputs align with deeply rooted social narratives.

While Yang focuses on creative image generation, we also drew from research in more technical, high-stakes domains. A peer-reviewed article by Yetisgen-Yildiz and Yetisgen (2024), published in Diagnostic and Interventional Radiology, examines how AI is used in medical imaging and shows that bias in training data can lead to less accurate diagnostic results for underrepresented groups. This underscores a broader truth: whether in healthcare or image generation, the data used to train AI determines who gets represented—and how. Their work reinforces the importance of carefully selecting diverse and inclusive datasets to ensure fairness and accuracy in AI outputs.

Lastly, a 2024 study in the Journal of Family Medicine and Primary Care analyzed AI-generated images of surgeons and found a significant underrepresentation of women and Black individuals when prompted with titles like “microsurgeon” or “plastic surgeon.” Although the study focuses on one AI system, it reveals how even single prompts can expose structural patterns of underrepresentation. These findings align with our project’s purpose: to show that biases in AI outputs are not random—they reflect systematic trends in how professions are imagined by the models generating them.

Together, these sources form a solid foundation for our lab report. By analyzing the visual representation of race, gender, and age in AI-generated images, our experiment highlights how bias emerges through recurring patterns and narrow portrayals. Rather than measuring statistical accuracy, we focus on the ways AI reflects and perpetuates social stereotypes, making this a crucial issue for developers, researchers, and everyday users alike.

Hypothesis:

AI image generators do not produce neutral or diverse portrayals. Instead, they overrepresent certain groups while underrepresenting others, reinforcing visual stereotypes based on race, age, and gender.

Materials and Methods

Materials: 

  • AI image generator: Rabbithole 
  • Prompt list: “Pharmacist,” “Babysitter,” “Mechanical Engineer” 
  • Spreadsheet or data collection software 
  • Data visualization tools (e.g., Google Sheets, Excel, Canva) 

Methodology: 

  1. We selected three professions that vary across stereotypes in gender, age, and race. 
  2. For each profession, we prompted the AI generator using only the job title (e.g., “pharmacist”) 100 times to gather a diverse sample of images. 
  3. Each group member analyzed the images based on three criteria: race, gender, and age group (child, young adult, middle-aged, senior). 
  4. We recorded the perceived demographics in a spreadsheet. 
  5. We also documented example images that clearly represented bias or over/underrepresentation. 
  6. Finally, we used graphs and charts to visualize trends and identify recurring patterns of over- and underrepresentation. 
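Steps 3–5 above amount to simple label counting. As a minimal sketch of that tallying step (the field names below are hypothetical and need not match our actual spreadsheet columns), the per-profession counts could be built like this:

```python
from collections import Counter

def tally(records):
    """Tally perceived gender, race, and age labels per profession.

    `records` is an iterable of dicts with keys: profession, gender,
    race, age_group -- one dict per coded image. (These field names
    are illustrative, not the exact labels from our spreadsheet.)
    """
    counts = {}
    for rec in records:
        prof = counts.setdefault(rec["profession"], {
            "gender": Counter(), "race": Counter(), "age_group": Counter(),
        })
        for field in ("gender", "race", "age_group"):
            prof[field][rec[field]] += 1
    return counts

# Example: two coded images for the "pharmacist" prompt
sample = [
    {"profession": "pharmacist", "gender": "male",
     "race": "white", "age_group": "middle-aged"},
    {"profession": "pharmacist", "gender": "female",
     "race": "east asian", "age_group": "young adult"},
]
result = tally(sample)
```

Once every image is a row like the ones in `sample`, the counts feed directly into the charts described in step 6.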

Results

For each profession (pharmacist, babysitter, and mechanical engineer), we generated 100 images using the AI platform Rabbithole, and then we categorized them by perceived age, gender, race, and whether the image appeared real or animated. Instead of comparing these results to real-world demographics, we focused on what the AI prioritized or omitted. The goal is to identify recurring patterns and stereotypes that suggest built-in bias.

Pharmacist

The pharmacist images revealed a clear gender and racial bias. 82% of the figures were male, and most appeared to be between 30 and 50 years old. White and East Asian features were overrepresented, while darker skin tones and women were noticeably underrepresented. This suggests the AI draws from a limited mental model of what a “pharmacist” looks like—favoring serious, older male professionals in medical-style clothing.

Babysitter

The AI overwhelmingly associated babysitters with young, light-skinned women. Over 90% of the images were female, and almost all were aged 20–30. The few male-presenting figures were blurry, distorted, or cartoonish. Additionally, more than 80% of the images were animated. Interestingly, most images showed messy rooms with toys and clutter—even though the prompt did not mention environment—implying the AI has internalized a stereotype that caregiving is chaotic and feminine.


Mechanical Engineer

Mechanical engineer outputs skewed heavily toward white, male figures aged 30–50. Around 70% of the images were male, and over 60% presented white individuals. There was limited representation of women or racial diversity, and about 20–25% of the images were not people at all, but robots or abstract, mechanical forms. This suggests the AI struggles to break from stereotypical associations between masculinity, machinery, and engineering roles.

Across all 12 charts, the AI reinforced a consistent visual narrative:

  • White men dominate high-skill professions.
  • Women are mainly placed in nurturing, domestic roles.
  • Youth and light skin are prioritized.
  • Diversity is limited or aestheticized, not central.
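The percentage figures reported above (for example, the 82% male share among pharmacist images) come from converting raw label counts into shares of the total. A minimal sketch, using the pharmacist gender split as illustrative input:

```python
from collections import Counter

def percentages(counter):
    """Convert raw label counts into percentage shares of the total."""
    total = sum(counter.values())
    return {label: round(100 * n / total, 1) for label, n in counter.items()}

# Illustrative counts matching the pharmacist gender split reported above
pharmacist_gender = Counter({"male": 82, "female": 18})
shares = percentages(pharmacist_gender)  # {'male': 82.0, 'female': 18.0}
```

The same conversion applies to each race and age tally before charting.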

These results confirm our hypothesis—not by comparing to national statistics, but by exposing the repetitive, biased ways the AI assigns identity through images. If left uncorrected, this technology risks further cementing harmful assumptions about who belongs in what role.

Discussion:

Our results show that AI image generators often fail to present a diverse and balanced view of professional roles. Instead of challenging existing social norms, they reinforce narrow stereotypes. For instance, most of the babysitter images depicted young, light-skinned women, while mechanical engineers were predominantly white men in their 30s to 50s. These recurring patterns suggest that the AI is not drawing from a wide or neutral dataset, but from training inputs steeped in traditional assumptions about gender, race, and age.

Across all three professions—pharmacist, babysitter, and mechanical engineer—there were clear patterns of bias. Men dominated in both pharmacist and engineering roles, with pharmacists being 82% male and engineers 70% male in our generated set. Women were consistently underrepresented, particularly in technical fields. In contrast, the babysitter role was almost exclusively assigned to women, showing that caregiving is still visually coded as a “feminine” job by the AI. Even more telling was how men, when placed in non-traditional roles like babysitting, were distorted or animated—often looking unrealistic or cartoonish. This distortion may reflect a lack of training data representing men in such roles.

Race bias was also consistent. White and East Asian features were the most common across all roles, while other racial and ethnic groups—Black, Hispanic, South Asian, Middle Eastern—were rare. There was little to no representation of multiracial individuals, and visual markers of racial diversity were often subtle or ambiguous. This suggests the AI model favors “default” appearances that align with socially dominant racial imagery.

The Real vs. Animated distinction offered further insight. Serious roles like pharmacist and engineer were more often portrayed with realistic visuals, while babysitters were overwhelmingly animated. This not only infantilized the caregiving role but also made it seem less professional or credible. It’s concerning that realism is selectively applied based on stereotypical perceptions of authority or skill.

Another surprising discovery was the presence of non-human figures—robots or abstract forms—especially in the engineering images. Roughly a quarter of the mechanical engineer images showed something other than a human being. This may point to a deeper association the AI has between technical jobs and mechanization, but it also reveals how the system struggles to represent diversity in these roles, defaulting to inhuman or ambiguous visuals instead.

Age bias also stood out. Middle-aged adults (30–50) dominated the pharmacist and engineer visuals, while babysitters were mostly in their 20s. Seniors and children—who exist in these fields in real life—were virtually erased. This “invisible aging” pattern in AI mirrors how older adults are often sidelined in media and tech, despite their presence in the workforce.

One of the most concerning trends was the complete absence of people with disabilities or visible physical differences. This erasure reflects a broader issue with AI-generated images: they promote a flattened, idealized version of humanity that excludes non-normative bodies. If people begin using these visuals in educational tools, media, or marketing, it could reinforce the harmful idea that only a narrow type of person “fits” a role.

Together, these patterns confirm our hypothesis: AI generators do not create unbiased portraits of professionals. Instead, they follow the same narrow scripts that have long been embedded in media, education, and popular culture. If we hadn’t seen such consistent results, we would’ve tested the prompt with other platforms or expanded our categories. But the patterns were clear enough that further testing wasn’t necessary. This experience shows the urgent need to retrain AI systems with broader datasets and implement regular bias checks. Without that, AI tools will continue to reflect and even amplify the inequalities we’re trying to move past.

Conclusion:

The goal of this lab report was to explore whether AI image generators reinforce bias when visualizing people in professional roles. We focused on three contrasting jobs: pharmacist, babysitter, and mechanical engineer. From the start, we hypothesized that the AI would follow visual stereotypes—such as portraying engineers as men and babysitters as young women. After generating and analyzing 100 images per profession, our hypothesis was confirmed.

The AI consistently repeated stereotypical associations: white men in high-skill professions, women in caregiving roles, youth over age diversity, and animation for domestic or nurturing jobs. Individuals who did not match these patterns—like male babysitters or people with darker skin tones—were rarely seen or visually distorted. No visible disabilities were shown. These results matter because as AI becomes more widely used, its visual biases can shape public perception and influence who people imagine in various roles. To make AI-generated images more fair, inclusive, and useful, developers must prioritize diverse training data and bias auditing practices. Without these steps, AI will continue to reflect the same limits and prejudices that exist in our world.

References 

Yang, Y. (2025). Racial bias in AI-generated images. AI & Society, 40(2), 123–135. https://doi.org/10.1007/s00146-025-02282-1

Yetisgen-Yildiz, A., & Yetisgen, M. (2024). Bias in artificial intelligence for medical imaging: Fundamentals, detection, avoidance, mitigation, challenges, ethics, and prospects. Diagnostic and Interventional Radiology, 31(2), 75–84. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11880872/ 

Bias in AI-generated imagery of surgeons: An evaluation of DALL-E 3 outputs and demographic representation. (2024). Journal of Family Medicine and Primary Care. https://www.sciencedirect.com/science/article/pii/S0974322724005581 

Self-Assessment Essay

By Elena Pjetergjokaj

City College of New York

21007 Writing for Engineering

Professor India Choquette

May 14, 2025

My name is Elena Pjetergjokaj, and I’m currently majoring in Civil Engineering & Jewish Studies at City College of New York. To reflect on this writing course, I have to start from the beginning. When I first enrolled at CCNY, like everyone else I had required classes to finish before jumping into my major courses, one of them being Writing for Engineering. At first I was a bit skeptical, partly because I wondered why engineers would need to write essays, and partly because I am more confident in my math skills than in my writing. As the weeks progressed, so did I. With the help of this course and our amazing professor, India Choquette, I found myself becoming more confident in drafting, revising, and presenting my ideas clearly. This course challenged me in various ways, but it also provided valuable tools to strengthen my writing abilities, especially when it came to expressing my perspective and analyzing complex topics. Beyond that, I have also grown in other areas: using library resources, articulating my stance through writing, developing drafting and revising strategies, and understanding the role of AI in writing.

First, one thing that sounds basic but can be tricky is using library resources and online databases. Who knew finding reliable sources could be so hard and annoying? Practicing with our library’s resources and the many online databases in this class to find a good peer-reviewed article for our lab report taught me how to critically evaluate sources and integrate them into my work in a way that strengthens my arguments.

Another thing I feel I got better at is articulating my stance through writing. When writing, it’s easy to digress and not stick to one opinion, especially when papers run long. Our lab report on AI image generators challenged me to present a clear argument based on the data we analyzed. Finding a balance between technical details and interpreting my findings was a bit challenging, but through drafting and feedback I learned to develop a more assertive tone while maintaining academic rigor. I focused on presenting my stance upfront and supporting it with credible evidence, which made my argument more convincing. Looking back on this objective, I believe I successfully achieved it. I am now more comfortable articulating my viewpoint and backing it up with research, which will benefit my future writing.

Now, what helped me get to where I am is developing good drafting and revising strategies. The objective I focused on was to enhance strategies for reading, drafting, revising, editing, and self-assessment. At the start of the semester, I didn’t necessarily struggle, but I wasn’t the best at drafting coherent essays and felt overwhelmed when I had to revise multiple drafts. The technical description assignment helped me a lot. Through it, I learned how to break the writing process into manageable steps, starting with brainstorming, researching, and outlining before jumping into the first draft. Peer review sessions were very helpful, too, since I received constructive feedback that pushed me to rethink my approach to explaining technical details. In this way, revising my work became less intimidating as I learned to focus on clarity and precision. I feel I achieved this objective as well, since my ability to revise and improve drafts grew significantly throughout the course. I also became more open to critique, understanding that feedback is a valuable part of the writing process. This growth has made me more confident in my ability to draft, edit, and produce clear, structured written pieces.

And finally, the objective that surprised me the most was “Understand both the limitations and strengths of AI (including ChatGPT and image generators) about our written work.” I wasn’t expecting that objective, especially in a college writing class, since traditionally, professors are against AI. But as time passed, I understood why that objective was there. Throughout the semester, I experimented with using AI tools to generate drafts and brainstorm ideas. While helpful, I also recognized that AI could sometimes produce generic or unoriginal content. This experience made me more critical of my writing, as I learned to use AI as a supplementary tool rather than a primary source of creativity. I feel that I have achieved this objective because I realized that AI is not the ‘enemy,’ but simply a tool that can help enhance my work. Now, I maintain a balance between utilizing AI for efficiency and preserving my unique voice in my writing. I now understand the importance of using AI tools responsibly and thoughtfully.

To conclude this paper, it’s safe to say that now I feel more prepared for my upcoming courses. This class not only improved my writing skills but also gave me the confidence to tackle complex assignments. I am excited to continue applying what I have learned, especially when it comes to articulating my ideas clearly, conducting thorough research, and making thoughtful use of AI tools. I know that the skills I developed in this class will serve me well in my future academic and professional pursuits.