The Midjourney Bot has been trained to produce images that favor artistic color, composition, and forms. The --stylize (or --s) parameter influences how strongly this training is applied. Low stylization values produce images that closely match the prompt but are less artistic. High stylization values create images that are very artistic but less connected to the prompt.
The stylize parameter takes an integer value in the range 0 through 1000. By default, Midjourney applies a stylize value of 100 to any image generated without specifying the --stylize parameter.
In this post, I will test three different prompts to see how the --stylize parameter affects the images generated.
The Car
Prompt:
motion photography, formula 1 grand prix ferrari speeding through streets, downtown chicago at night, cinematic style, asymmetric composition, global illumination, shallow depth of field --ar 3:1 --seed 10 --stylize XXXX
Let me explain some of the parameters I’ve used in the prompt:
- --ar 3:1: This instructs Midjourney to generate images with a 3:1 aspect ratio, so the images will be three times wider than they are tall.
- --seed 10: This instructs Midjourney to start from the same seed for each image. Every image starts as a grid of random noise, and setting the seed parameter keeps that starting grid consistent. It doesn't matter which number you choose, as long as it stays the same across all of the prompts you want to compare.
- --stylize: This is the parameter we will be changing (in place of XXXX) to see how it affects the output image. The sketch after this list shows one quick way to generate all the prompt variations.
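Since the prompt only differs in the --stylize value, it's handy to generate all the variations at once and paste them into Discord one by one. The snippet below is just a convenience script I'm assuming you'd run locally; the stylize values chosen are the ones compared in this post, and nothing here is part of Midjourney itself.

```python
# Build the prompt text for each --stylize value to test.
# Midjourney takes these as chat prompts, so this script only prints them
# for copy-and-paste into Discord.

PROMPT_TEMPLATE = (
    "motion photography, formula 1 grand prix ferrari speeding through streets, "
    "downtown chicago at night, cinematic style, asymmetric composition, "
    "global illumination, shallow depth of field "
    "--ar 3:1 --seed 10 --stylize {stylize}"
)

STYLIZE_VALUES = [0, 250, 500, 750, 1000]  # the values compared in this post

for value in STYLIZE_VALUES:
    print(PROMPT_TEMPLATE.format(stylize=value))
```

The same template trick works for the flower and cat prompts below; only the descriptive text changes.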
Flower On Cliff
Prompt:
extreme close up photography, beautiful purple flower, on side of cliff overlooking ocean, cinematic style, asymmetric composition, global illumination, shallow depth of field --ar 3:1 --seed 10 --stylize XXXX
The Cat
Prompt:
extreme close up photography, maine coon cat, on white couch looking into camera, cinematic style, asymmetric composition, global illumination, shallow depth of field --ar 3:1 --seed 10 --stylize XXXX
It seems the --stylize parameter works best when set between 250 and 500 for most images. In the case of the cat images, however, it only started to have a noticeable effect when set between 750 and 1000.
Although we used the --seed parameter to freeze the features of the image, the --stylize parameter still seems to add a layer of abstraction and randomness to the final image. This is to be expected: my guess is that --stylize corresponds to a separate model trained on smaller image details and colors, and that Midjourney uses the value of the --stylize parameter to determine how much of that 'stylize model' output gets blended with the base image it generates.
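If that guess is right, you can picture the blend as a simple weighted mix, with the --stylize value (0 to 1000) setting the weight. The sketch below is purely illustrative of that speculation, not how Midjourney actually works; the blend_images function and the two "model outputs" are hypothetical stand-ins.

```python
import numpy as np

def blend_images(base: np.ndarray, stylized: np.ndarray, stylize: int) -> np.ndarray:
    """Illustrative only: mix a hypothetical base output with a hypothetical
    'stylize model' output, weighted by the --stylize value (0-1000)."""
    weight = max(0, min(stylize, 1000)) / 1000.0  # clamp and normalize to 0..1
    return (1.0 - weight) * base + weight * stylized

# Toy example with random arrays standing in for images, just to show the weighting.
base = np.random.rand(64, 64, 3)
stylized = np.random.rand(64, 64, 3)
result = blend_images(base, stylized, stylize=250)  # 25% stylized, 75% base
```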
You may not always need to specify the --stylize parameter, but sometimes it can help add that extra 'pop' to an image.