I watched a video course on paid social media advertising while developing a strategy for promoting my recent novel, Tumult in Mecca. The course recommended using stills instead of videos, and using generative AI to create images of scenes from the book.
The course recommended testing several images, since it is impossible to predict which one will catch the most attention (Attention being the first step in the AIDA process).
So, I ventured into the mystical world of AI text prompt engineering using OpenAI’s DALL-E, which I access through my ChatGPT Plus subscription.
Let me give you the main conclusions:
Text prompt engineering is time-consuming and difficult if you want a specific outcome and consistent characters across images. It took me hours to generate the three images I needed for the first test.
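One tactic that helps with consistency is locking the character descriptions into a reusable prompt template, so every image request repeats exactly the same wording. A minimal sketch in Python; the character descriptions, scene text, and style string are illustrative, not my actual prompts:

```python
# Fixed character descriptions: repeating the exact same wording in every
# prompt nudges the image generator toward a consistent cast of characters.
CHARACTERS = {
    "narrator": "a clean-shaven blond Scandinavian man in his thirties, "
                "wearing a traditional white Saudi thobe",
    "driver": "a middle-aged Arab man with a short grey beard, "
              "wearing a red-checked shemagh",
}

STYLE = "photorealistic, late-1970s Middle East, muted colours, no text"

def build_prompt(scene: str, who: list[str]) -> str:
    """Combine a one-off scene description with the fixed character blocks."""
    cast = "; ".join(CHARACTERS[name] for name in who)
    return f"{scene}. Characters: {cast}. Style: {STYLE}."

prompt = build_prompt(
    "Two men talking beside a dusty Mercedes outside Jeddah",
    ["narrator", "driver"],
)
print(prompt)
```

The same template can then feed every image request, so only the scene sentence varies between images.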
Although I was particular about describing the cultural environment, the images had an American tone that proved hard to eliminate.
Strange artefacts were a constant problem and almost impossible to remove.
AI Slop
Facebook was the first social media platform I tested the ads on, and soon I started getting messages saying that what I did was AI slop (or that I was an AI slop).
I looked up the term and found this explanation: AI slop refers to poor-quality, often unwanted, AI-generated content.
Why would anyone take the time to comment on a picture in an ad for a book? Given that Tumult in Mecca is a novel, a work of fiction, wouldn’t AI-generated images of scenes from the narrative be precisely the right approach? I asked some of the commenters, and the responses I got were beside the point; they came from people who, for some reason, simply dislike anything AI-generated.
I don’t find that the images have quality issues. Quite the opposite: it is an advantage that they look artificial. Most ads are unwanted, but that has nothing to do with AI. Being against AI-generated text and images may be a thing, but it is not one I suffer from. If my potential readers do, then I’ll back off.
The picture in the middle (above) took the longest to generate, and I am still not happy with the outcome. The quality is not bad, but it doesn’t illustrate what I described in the text prompt. Maybe I need to improve my prompt-engineering skills, or maybe the image generator has a hard time following instructions about individual characters. Perhaps a combination.
Generating images with several characters that have unusual features proved extremely difficult. For some reason (bias?), DALL-E struggled to generate an image of a blond person without a beard, dressed in a traditional Saudi outfit. If you describe a person as Arab, he (it’s always a male) will have a beard.
Artefacts
Almost all the AI-generated images my text prompt produced had unwanted artefacts.
Why is there a helicopter in the image in the middle of the first strip above? I didn’t ask for it.
The image to the left is supposed to show three Scandinavian-looking men in a bar at Beirut airport in 1979. Look at the legs of the man in the middle. I didn’t notice the issue at first glance, but caught it once I had learned to check for artefacts.
DALL-E has an editing function that should allow you to change segments of an image. The outcome of an edit, however, was unpredictable. I needed a bald person in one image, but the image generator didn’t pick up that request from my text prompt. Marking the face in the generated image and asking for a bald head changed the whole setting and gave other characters beards.
When I defined a setting as business, DALL-E gave the characters three-piece suits, although those are very uncommon in Scandinavia.
Targeting and optimisation
Marketing 101 says that you need to define your target segment before crafting the message that conveys the value proposition. However, identifying a target segment for my novel is difficult (using social media segmentation criteria), and the video course recommended leaving the job to Facebook. Facebook then exposes the ad to a broad variety of users, identifies the characteristics of those who react, and continuously adjusts its targeting to improve conversion over time.
Given a number of alternatives (texts and images), Facebook will test which combination has the highest impact. You can keep testing until you reach an acceptable cost per conversion.
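The bookkeeping behind that kind of testing is simple: spend the same budget on each combination and compare the cost per conversion. A sketch in Python; the variant names, spend, and conversion counts are made-up figures, not my campaign data:

```python
# Hypothetical results for three text/image combinations at equal spend.
variants = {
    "image_A + text_1": {"spend": 40.0, "conversions": 5},
    "image_B + text_1": {"spend": 40.0, "conversions": 2},
    "image_A + text_2": {"spend": 40.0, "conversions": 8},
}

def cost_per_conversion(v: dict) -> float:
    # Variants with zero conversions get an infinite cost,
    # so they can never win the comparison.
    return v["spend"] / v["conversions"] if v["conversions"] else float("inf")

# Pick the combination that converts most cheaply.
best = min(variants, key=lambda k: cost_per_conversion(variants[k]))
print(best, cost_per_conversion(variants[best]))  # image_A + text_2 5.0
```

Facebook performs this comparison for you, but keeping your own version of the numbers makes it easier to judge when the cost per conversion is low enough to be profitable.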
The ultimate metric
My textbooks (on revenue generation and business development in the software industry) sell very well without paid advertisement, but so far, my novels do not. Can promoting a novel through paid advertisement serve the full AIDA purchase process, and how can I measure if it does?
My novels are inexpensive consumer products. The price of the electronic version is $9.99, the paperback is $14.99, and the audio version can be streamed (and thus paid for through the reader’s subscription) or purchased for $14.99 (all prices before VAT or sales tax). The gross margin on my printed and electronic books is around $6. On the audio versions, which are primarily streamed through subscription services, the gross margin is around $1. (Streaming services have managed to substantially decrease content creators’ gross margins.)
Finding a profitable model would require much experimentation, and handcrafting the images would increase the upfront investment and thus the risk.
The objective of my advertising experiment was to test whether it could generate profitable book sales. By using Amazon Attribution tags, I could monitor the traffic coming from the Facebook ad campaigns.
So far, I have not found a profitable model. The ads I tested in the UK produced 23,000 exposures and 325 clicks on the call-to-action link. The Amazon Attribution dashboard registered no book sales.
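The break-even condition is straightforward: the cost per purchase must stay below the gross margin, which fixes the minimum click-to-purchase rate the ads would need. A quick sketch using the UK figures above; the ad spend is an assumed illustrative number, not my actual budget:

```python
# Figures from the UK test described above.
exposures = 23_000
clicks = 325
gross_margin = 6.00   # USD, printed and electronic books

ad_spend = 50.00      # ASSUMPTION: illustrative budget, not the real figure

ctr = clicks / exposures
cost_per_click = ad_spend / clicks

# Break-even requires ad_spend / purchases <= gross_margin.
# Dividing both sides by clicks: the click-to-purchase rate
# must be at least cost_per_click / gross_margin.
required_conversion_rate = cost_per_click / gross_margin

print(f"CTR: {ctr:.2%}")
print(f"Cost per click: ${cost_per_click:.3f}")
print(f"Required click-to-purchase rate: {required_conversion_rate:.2%}")
```

Under these assumed numbers, roughly one in forty clicks would have to end in a purchase just to break even; with zero recorded sales out of 325 clicks, the actual rate is far below that.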
The question now is: can I, through continued experimentation, find a set of ads that converts at a cost per purchase below the book’s gross margin, or am I wasting my time and money?