The inspiration for BloomCraft came from a frustrating experience before a planned reunion with a respected teacher I hadn't seen in years. I wanted to bring the perfect bouquet to express my gratitude, but my visit to the local florist was anything but smooth. Despite allowing plenty of time, the process took far longer than expected. I struggled to communicate the exact bouquet I was envisioning, and the florist had difficulty translating my vague ideas into a concrete arrangement. The result? The price came in well over what I had estimated, and worse, I ended up late for my meeting. This experience revealed a clear need:
But that's not all. Having lived in both Asia and Europe, I've seen how dramatically a flower's meaning can change across cultures. A beautiful bloom in one country can carry an unintended message in another. That's why another core focus of BloomCraft is cultural intelligence. I want to ensure the bouquets we craft convey exactly what you mean, no matter where in the world you are.
Personalized flower recommendations
Users input the occasion, recipient, style, or favorite colors, and the app suggests flowers and example arrangements
Wide range of scenarios: romantic gestures, sympathy bouquets, thank-you gifts, celebrations, and more
Visual arrangement generator
AI creates arrangement images that match the chosen occasion or setting
Users can swap flowers, adjust how full the bouquet looks, and pick backgrounds (like a wedding table, teacher’s desk, or living room)
Pricing estimation
Breaks down costs flower by flower and shows the total
Allows people to adjust the bouquet to stay within budget
Education and inspiration hub
Simple flower cards with names, meanings, care tips, and cultural context
Guides for what works best at weddings, for condolences, or as thank-yous
Explanations of what a full arrangement “says” through its symbolism
User experience flow
People can start from any entry point: by occasion, by a favorite flower, or through an uploaded inspiration image
Quick edits make it easy to swap flowers or colors
Arrangements can be saved and shared with friends or a local florist
Features to potentially include
A feedback loop where users rate bouquets, which improves future suggestions
I designed the backend API and the prompt-engineering pipeline, using DeepSeek O1 to synthesize user constraints (occasion, culture, seasonality) and a fine-tuned Stable Diffusion model to generate photoreal bouquet images. Stack: Lovable (frontend rapid prototyping), Supabase (DB), custom prompt templates, and fast-dreambooth fine-tuning.
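To make the pipeline concrete, here is a minimal sketch of how the pieces could be wired together, assuming an OpenAI-compatible DeepSeek endpoint and a fine-tuned Stable Diffusion checkpoint loaded with diffusers. The model name, checkpoint path, constraint fields, and helper function are placeholders for illustration, not the production code.

```python
# Minimal sketch of the constraint -> prompt -> image pipeline (illustrative only).
# Assumes the `openai` client pointed at DeepSeek's OpenAI-compatible API and a
# fine-tuned Stable Diffusion checkpoint loadable with diffusers.
from openai import OpenAI
from diffusers import StableDiffusionPipeline
import torch

llm = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")
sd = StableDiffusionPipeline.from_pretrained(
    "./bloomcraft-dreambooth-v3",          # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

def build_image_prompt(occasion: str, culture: str, season: str, colors: str) -> str:
    """Ask the LLM to turn user constraints into a standardized image prompt."""
    response = llm.chat.completions.create(
        model="deepseek-chat",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a florist's design assistant. "
             "Respect cultural flower symbolism and seasonality. "
             "Reply with a single Stable Diffusion prompt, nothing else."},
            {"role": "user", "content": f"Occasion: {occasion}; culture: {culture}; "
             f"season: {season}; preferred colors: {colors}."},
        ],
    )
    return response.choices[0].message.content.strip()

prompt = build_image_prompt("teacher reunion", "Japanese", "spring", "soft pink and white")
image = sd(prompt, num_inference_steps=30).images[0]
image.save("bouquet_preview.png")
```

In the real app, the generated prompt and resulting image would then be saved to Supabase so users can revisit and share their arrangements.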
A crucial part of this website was the prompt creation process. I needed it to do three things:
Intake information and create a design according to the provided details and any constraints.
Output the design using the specified format.
Construct a formatted prompt for the generative AI.
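As a rough illustration of how these three requirements might sit inside one system prompt (a hypothetical sketch, not the exact prompt shown below), the template could look like this:

```python
# Hypothetical prompt template illustrating the three requirements above
# (intake + constraints, structured design output, formatted image-generation prompt).
SYSTEM_PROMPT = """
You are BloomCraft's bouquet designer.

1. INTAKE: Read the user's occasion, recipient, cultural context, budget,
   season, and color preferences. Respect every stated constraint.

2. DESIGN OUTPUT: Return the design in this format:
   - Flowers: <name> x <count> (meaning, approximate unit price)
   - Greenery/fillers: <list>
   - Estimated total: <price>

3. IMAGE PROMPT: Finish with one line beginning "IMAGE PROMPT:" containing a
   Stable Diffusion prompt for the arrangement (flowers, colors, lighting,
   background setting), matching the format used during fine-tuning.
"""
```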
Below, you can see the specific prompt that would be given to the DeepSeek API.
Note, however, that this version does not include step 3 yet. The reason is that I need to test the image generation AI first to determine what prompt format will work best.
Below you can see the standardized prompt that will be given to DeepSeek to output the prompt used for image generation.
In this part, I used the fast-dreambooth training environment open-sourced by TheLastBen, which is built on top of Stable Diffusion. Below you can find the iterations of generated images and the image sets used:
This is the base model, hence why it uses the most pictures: a set of 35 images that I carefully selected. All of them frame the bouquet in a similar way, which keeps distractions away from the diffusion model and lets it focus on the parts I want it to learn. These images cover a wide range of colors. At this stage I wasn't paying much attention to the variety of flowers or which specific flowers appeared.
In this version of the model, I put a specific emphasis on red bouquets, sunflower-based yellow bouquets, and several rarer colors such as blue and purple. This is because I noticed that the model is particularly good at pink and white bouquets, but significantly weaker with other colors and with flowers that are not commonly seen in those shades. I did this with the aim of increasing the variety of flowers and avoiding problems later. I also focused on lighting and how I want the flowers to look aesthetically.
At this stage of the model, I would say I am pretty satisfied with the results. The model is consistently generating bouquets of good quality, meaning there are significantly fewer distorted images and fewer flowers that simply weren't aesthetically pleasing.
The V3 and V3.5 models were specifically targeted at the model's weaknesses. My training strategy became choosing very characteristic images (e.g. a red flower bouquet) and increasing the number of cycles each image runs through during training, as sketched below. This enables the model to become significantly better in the targeted areas.
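One simple way to picture this weighting, assuming a trainer that just consumes a folder of instance images, is to oversample the targeted images so they pass through more training cycles. The script below only sketches that idea; it is not fast-dreambooth's own mechanism, and the folder names and repeat counts are made up for illustration.

```python
# Sketch: oversample weak-area images (e.g. red bouquets) by copying them
# multiple times into the training folder so they get more training cycles.
import shutil
from pathlib import Path

SRC = Path("datasets/bouquets_v3")        # hypothetical curated image set
DST = Path("datasets/bouquets_v3_train")  # folder handed to the trainer
REPEATS = {"red": 4, "blue": 3, "purple": 3}  # extra passes for weak colors

DST.mkdir(parents=True, exist_ok=True)
for img in SRC.glob("*.png"):
    # Images tagged with a weak color in their filename get extra copies.
    n = next((r for key, r in REPEATS.items() if key in img.stem.lower()), 1)
    for i in range(n):
        shutil.copy(img, DST / f"{img.stem}_{i}{img.suffix}")
```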
Captions serve their literal purpose in training: the model reads each caption alongside its image and learns how the image is described. This not only helps the model understand what is in the image, e.g. what each type of flower should look like, but also helps it better understand the prompts you give it later. One of the strategies I used was to caption each image with the exact prompt I would use to generate it, following the same format. This essentially means that if I type that prompt into the model again, it has seen it before and can correlate it with an image it was trained on, allowing it to better mimic that image and ultimately produce a more realistic result.
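As an illustration, assuming a training setup that reads a sidecar .txt caption next to each image (a common convention, though not necessarily the exact one fast-dreambooth uses), generating prompt-formatted captions could look like this; the template string, filenames, and per-image notes are assumptions for the sketch only.

```python
# Sketch: write prompt-formatted captions as sidecar .txt files next to images.
from pathlib import Path

TEMPLATE = "a photo of a {color} flower bouquet, {flowers}, soft studio lighting"

METADATA = {  # hypothetical per-image notes gathered while curating the set
    "red_bouquet_01.png": {"color": "red", "flowers": "roses and carnations"},
    "blue_bouquet_02.png": {"color": "blue", "flowers": "delphiniums and eucalyptus"},
}

for filename, info in METADATA.items():
    image_path = Path("datasets/bouquets_v3_train") / filename
    caption = TEMPLATE.format(**info)
    # e.g. red_bouquet_01.txt sits next to red_bouquet_01.png
    image_path.with_suffix(".txt").write_text(caption, encoding="utf-8")
```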
This is image ghconfnu(7), later ghconfnu_v2(10) and ghconfnu_v3(1), an image that was very representative of a red bouquet and was therefore used in all three versions of the model.
Below you can see how the captions have evolved. In ghconfnu_v2 I realized I could simply use the prompt format as the image's caption, while in ghconfnu_v3 I added more detail to the description, which let me tweak the image details to my preferences.