My Journey into Novel Creation Using Generative AI: Day 1

Published: 12/25/2024
Categories: genai, literature, beginners, llm
Author: saugata

As someone passionate about exploring the capabilities of Generative AI, I recently embarked on a project to create literature using LLMs (Large Language Models). This is my first attempt at implementing a fully automated novel-writing pipeline, and I'm thrilled to share my experience so far. Here’s how it went on Day 1.

Choosing the Tools

For this project, I decided to use the Groq client because of the incredible speed of its LPU (Language Processing Unit). While I could have run Ollama locally on my computer, Groq's efficiency was the clear winner for this task. Additionally, I brainstormed the overall implementation strategy with Microsoft Copilot, which proved invaluable in refining my ideas.
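
For reference, here is a minimal sketch of a chat-completion call with the Groq Python SDK, assuming the API key lives in the GROQ_API_KEY environment variable; the model name is a placeholder rather than the exact one I used:

```python
from groq import Groq

# The SDK reads GROQ_API_KEY from the environment when no api_key is passed.
client = Groq()

response = client.chat.completions.create(
    model="llama3-70b-8192",  # placeholder; any Groq-hosted chat model works here
    messages=[{"role": "user", "content": "Pitch a mystery novel in one sentence."}],
)
print(response.choices[0].message.content)
```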

To test my approach, I created a Jupyter Notebook and started building the foundation of the project. My longer-term plan is to deploy the pipeline as a Streamlit app for a more interactive experience.

Tackling the Challenges

Generating a long, coherent, and engaging novel is no small feat, especially given the token limits of LLMs. My initial attempts fell short of the desired quality, so I refined the process as follows:

1. Outlining the Novel

The first step was generating a broad outline of the novel using the LLM. This outline served as the backbone for the entire story.
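
A rough sketch of this step, reusing the same client setup as above; the helper name and prompt wording are illustrative, not my exact implementation:

```python
from groq import Groq

client = Groq()
MODEL = "llama3-70b-8192"  # placeholder model name

def generate_outline(premise: str) -> str:
    """Ask the model for a broad, high-level outline of the novel."""
    prompt = (
        "You are a novelist. Write a broad outline for a novel based on this premise:\n"
        f"{premise}\n\n"
        "Cover the setup, the major turning points, the climax, and the resolution."
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

outline = generate_outline("A lighthouse keeper notices the sea is slowly forgetting the land.")
```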

2. Creating Chapters and Subplots

Next, I instructed the LLM to generate a list of chapters with detailed descriptions, including key scenes and subplots. This ensured the story had structure and direction.
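
Conceptually, that step looked something like this sketch, again with an illustrative prompt and helper name:

```python
from groq import Groq

client = Groq()
MODEL = "llama3-70b-8192"  # placeholder model name

def generate_chapter_plan(outline: str, num_chapters: int = 12) -> str:
    """Turn the outline into a numbered chapter plan with scenes and subplots."""
    prompt = (
        f"Based on the outline below, plan {num_chapters} chapters.\n"
        "For each chapter give a title, a short description, the key scenes, "
        "and any subplots it advances.\n\n"
        f"OUTLINE:\n{outline}"
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```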

3. Expanding Each Chapter

Each chapter was then developed into a comprehensive story. To maintain coherence, the LLM had access to the outline and previous chapters, ensuring continuity across the narrative.
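
In sketch form, each chapter call received the outline plus the most recent chapters as context; trimming to the last couple of chapters is a simplification here, one way to keep the prompt within the token budget:

```python
from groq import Groq

client = Groq()
MODEL = "llama3-70b-8192"  # placeholder model name

def write_chapter(outline: str, chapter_brief: str, previous_chapters: list[str]) -> str:
    """Expand one chapter, passing the outline and recent chapters for continuity."""
    story_so_far = "\n\n".join(previous_chapters[-2:])  # only the most recent chapters fit in the prompt
    prompt = (
        "You are writing a novel chapter by chapter.\n\n"
        f"OUTLINE:\n{outline}\n\n"
        f"STORY SO FAR:\n{story_so_far}\n\n"
        f"Write the next chapter in full, following this brief:\n{chapter_brief}"
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```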

4. Overcoming Token Limitations

Despite the automation, token limitations occasionally interrupted the flow. To address this, I implemented summarization with overlapping chunking. This technique allowed the model to work within its constraints while retaining contextual integrity.
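
The idea, in simplified form: split long context into chunks that overlap so nothing is lost at the boundaries, summarize each chunk, and feed the stitched summary back in place of the full text. The chunk sizes and prompt below are illustrative choices, not my exact settings:

```python
from groq import Groq

client = Groq()
MODEL = "llama3-70b-8192"  # placeholder model name

def chunk_with_overlap(text: str, chunk_size: int = 4000, overlap: int = 500) -> list[str]:
    """Split text into character chunks that share an overlap at their boundaries."""
    chunks, start = [], 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

def summarize(text: str) -> str:
    """Summarize each overlapping chunk, then join the partial summaries."""
    partial_summaries = []
    for chunk in chunk_with_overlap(text):
        response = client.chat.completions.create(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": f"Summarize this novel excerpt, keeping plot-critical details:\n\n{chunk}",
            }],
        )
        partial_summaries.append(response.choices[0].message.content)
    return "\n".join(partial_summaries)
```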

The Results: A Promising Start

The automation worked beautifully! With just a single click, the entire story was generated as the LLMs "conversed" among themselves. The resulting novel was creative, with:

  • A compelling plot and well-executed climax and twists.
  • Vivid descriptions that set immersive scenes.

However, there were areas for improvement:

  • Rushed Pacing: While the stage-setting was effective, transitions felt abrupt as the story moved to the next plot point.
  • Lack of Emotional Depth: The narrative occasionally felt mechanical, missing the nuanced emotions that make characters and events truly resonate.

Plans for Day 2

To address these issues, I’ll focus on:

  1. Better Prompt Engineering: Crafting prompts that encourage the LLM to add more emotional depth and smooth transitions.
  2. Refining Chapters: Feeding the chapters back into the LLM for iterative enhancements, focusing on making them more emotionally engaging and less rushed.
  3. Integrating Web Search and Databases: Exploring ways to incorporate real-world data into the pipeline for creating other forms of literature.

Conclusion

This is just the beginning of my journey using LLMs for creative writing. I'm excited about the potential of this technology and eager to see where it takes me. I'll be sure to share my progress and learnings along the way.
