AI-powered multi-modal storytelling project generating story text, cover images, and PDF storybooks using Hugging Face API and Stable Diffusion

TaleWeaver-StoryGen

Overview

TaleWeaver-StoryGen is an AI-powered storytelling project that generates complete stories, each with a unique title, a continuous narrative, and a moral lesson.
It is a multi-modal (text + image) AI project: alongside the story text, it produces a matching cover image for the story.

This project demonstrates:

  • Generative AI concepts
  • API usage
  • Running models locally via pipelines
  • Integrating text and image generation into one workflow

Multi-Modal AI / Text & Image Integration

  • Text Generation: Uses mistralai/Mistral-7B-Instruct-v0.2 via Hugging Face API to generate story text.
  • Image Generation: Uses Stable Diffusion pipeline to create a cover image based on the story title.
  • PDF Creation: Combines generated text and cover image into a professional storybook PDF.

This demonstrates how a single AI workflow can combine multiple modalities (text + image).
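The three stages above can be sketched roughly as follows. The helper names (`build_story_prompt`, `generate_story`, `generate_cover`) are illustrative, not the project's actual API; the Mistral model ID comes from this README, while the Stable Diffusion checkpoint name is an assumption:

```python
import os

def build_story_prompt(topic: str, genre: str, style: str, length_pages: int) -> str:
    """Compose an instruction prompt asking for a titled story with a moral."""
    return (
        f"Write a {length_pages}-page {genre} story about '{topic}' "
        f"in a {style} style. Format the answer as:\n"
        "Title: <title>\nStory: <story>\nMoral: <moral>"
    )

def generate_story(prompt: str) -> str:
    """Call the hosted Mistral model through the Hugging Face Inference API."""
    from huggingface_hub import InferenceClient  # third-party dependency
    client = InferenceClient(
        model="mistralai/Mistral-7B-Instruct-v0.2",
        token=os.environ["HF_TOKEN"],
    )
    return client.text_generation(prompt, max_new_tokens=1500)

def generate_cover(title: str):
    """Run a local Stable Diffusion pipeline to draw a cover image.
    The checkpoint name here is an assumed placeholder."""
    import torch
    from diffusers import StableDiffusionPipeline  # third-party dependency
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
    return pipe(f"storybook cover illustration for '{title}'").images[0]
```

The prompt builder is pure Python; only `generate_story` needs network access and a token, and only `generate_cover` needs a GPU-capable machine.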


Features

  • Generates stories based on topic, genre, and style.
  • Parses stories into Title, Story, and Moral.
  • Generates a cover image automatically.
  • Creates a PDF storybook with cover, formatted paragraphs, and moral.
  • Uses Hugging Face API for text generation.
  • Uses local Stable Diffusion pipeline for image generation.
  • Supports GPU acceleration for fast local image inference.
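The parsing feature above could look like the following sketch, assuming the prompt asks the model to label its answer with Title:/Story:/Moral: markers (the exact format used in the notebook may differ):

```python
import re

def parse_story(raw: str) -> dict:
    """Split a labelled model response into title, story body, and moral."""
    # Capture each labelled section up to the next label (or end of text).
    pattern = re.compile(
        r"Title:\s*(?P<title>.*?)\s*Story:\s*(?P<story>.*?)\s*Moral:\s*(?P<moral>.*)",
        re.DOTALL,
    )
    match = pattern.search(raw)
    if match is None:
        raise ValueError("Response did not contain Title/Story/Moral sections")
    return {key: value.strip() for key, value in match.groupdict().items()}
```

Validating the structure up front (rather than assuming the model followed instructions) makes the later PDF step fail early with a clear error instead of producing a malformed book.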

Usage

  1. Set story parameters:

     topic = "The Lion and the Crystal Lake"
     genre = "Fantasy"
     style = "Magical and Heart-touching"
     length_pages = 3

  2. Run the notebook (TaleWeaver.ipynb) or script.
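Before the story text is placed into the PDF, it has to be split into paragraphs and wrapped to the page width. Below is a minimal sketch of that layout step using only the standard library; the assumption is that the notebook then hands these lines to a PDF library (the exact library it uses is not stated here):

```python
import textwrap

def layout_story(story: str, width: int = 80) -> list[str]:
    """Split the story into paragraphs and wrap each to the page width,
    keeping a blank line between paragraphs."""
    lines: list[str] = []
    for paragraph in story.split("\n\n"):
        lines.extend(textwrap.wrap(paragraph.strip(), width=width))
        lines.append("")  # blank separator line after each paragraph
    return lines[:-1]  # drop the trailing blank line
```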

Installation

  1. Clone the repository:

     git clone https://github.com/imhnor/TaleWeaver-StoryGen.git
     cd TaleWeaver-StoryGen

Hugging Face API Token

To use hosted text generation models:

  1. Create an account on Hugging Face.
  2. Go to Settings → Access Tokens → New Token.
  3. Copy the token.
  4. Set it in your code:
import os
os.environ["HF_TOKEN"] = "your_huggingface_token_here"

Or directly in the client:

from huggingface_hub import InferenceClient
client = InferenceClient(token="your_huggingface_token_here")

The token is required to generate text from Hugging Face hosted models.
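Since a missing token only surfaces as an API error later, a small helper (illustrative, not part of the project) that reads HF_TOKEN from the environment and fails fast can save debugging time:

```python
import os

def get_hf_token() -> str:
    """Return the Hugging Face token from the environment, or raise a clear error."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set. Create one under Settings -> Access Tokens "
            "on huggingface.co and export it before running the notebook."
        )
    return token
```

The returned value can then be passed straight to InferenceClient(token=get_hf_token()).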


Learning Purpose / What You Learn

This project is designed to help understand how Generative AI works:

  • Prompt Design: Craft prompts for structured story generation.
  • Multi-Modal AI: Learn to integrate text and image generation.
  • API & Local Models: Use Hugging Face API and local pipelines together.
  • Content Processing: Convert AI outputs into professional PDFs.
  • Creative Experimentation: Understand AI storytelling and explore model behavior.

