How I Used Gen AI (ChatGPT and Claude) to Build a Spotify AI Assistant from Scratch

Mandy Nicole Hong
8 min read · Dec 18, 2024


Generative AI is transforming the way we learn and build things. As someone with a background in film, marketing, and data science, I never imagined exploring software development. Yet, with tools like ChatGPT and Claude, the idea of creating something from scratch no longer seemed impossible.

With zero experience in frontend, backend, HTML or JavaScript, I decided to test whether AI could guide me through building a functional web app. Here’s how I turned an ambitious idea into reality, the lessons I learned, and the challenges I overcame along the way.

Note: This project is open source on GitHub, and while I haven’t requested an extension for Spotify’s API usage, I can provide limited access to the app for a small number of testers using their Spotify accounts. If you’re interested, feel free to reach out to me via LinkedIn or email me at keqing.hong@yahoo.com!

Product Demo

https://myspotipal.com

Getting Started: Turning an Idea into Action

With no experience in frontend or backend development, I relied on AI tools to map my first steps. ChatGPT guided me in setting up a GitHub repository, creating SSH keys, and building a Python environment.

I chose Flask for its simplicity and beginner-friendly nature. My initial goal was straightforward: connect to the Spotify API and fetch data like top artists and recently played tracks. By following Spotify’s developer documentation, I set up an app, generated client credentials, and implemented authentication. With ChatGPT’s help, I connected to the Spotify API and built app routes to display this data in an HTML file.
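The first pieces of that setup can be sketched in a few lines. This is a minimal illustration of the two building blocks Spotify’s authorization-code flow needs: the authorize URL the user is redirected to, and the Basic auth header used when exchanging the returned code for an access token. The client ID and redirect URI are placeholders; real values come from your Spotify developer dashboard.

```python
import base64
import urllib.parse

# Placeholder credentials -- real values come from the Spotify developer dashboard.
CLIENT_ID = "your-client-id"
REDIRECT_URI = "http://localhost:5000/callback"

def build_authorize_url(scopes: list[str]) -> str:
    """Build the Spotify authorization URL the user is redirected to."""
    params = {
        "client_id": CLIENT_ID,
        "response_type": "code",
        "redirect_uri": REDIRECT_URI,
        "scope": " ".join(scopes),  # e.g. permissions for top artists / history
    }
    return "https://accounts.spotify.com/authorize?" + urllib.parse.urlencode(params)

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Header sent when exchanging the authorization code for an access token."""
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return f"Basic {token}"

url = build_authorize_url(["user-top-read", "user-read-recently-played"])
```

With the access token in hand, fetching top artists is a GET request to Spotify’s `/v1/me/top/artists` endpoint with a `Bearer` header, which a Flask route can then render into an HTML template.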

Integrating OpenAI and Creating a Chatbot

After establishing the Spotify API connection, I integrated OpenAI’s API to create chatbot functionality. My initial vision included:

  • Logging in with Spotify credentials
  • Retrieving data from Spotify API endpoints to answer queries
  • Searching for Spotify items and enriching responses with external knowledge
  • Generating playlists based on mood, length, or specific examples such as an artist, track, or genre (though Spotify’s recent deprecation of key recommendation endpoints has made this feature more complex and less reliable)
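Behind features like these sits a chat request that pairs the user’s question with their Spotify data. Here is a minimal sketch of how such a payload can be assembled, assuming OpenAI’s chat-completions message format; the model name and system prompt are illustrative, not the app’s actual ones.

```python
import json

def build_chat_request(user_message: str, spotify_context: dict) -> dict:
    """Assemble a chat-completions payload; Spotify data is injected as
    context so the model can answer listening-history questions."""
    return {
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [
            {
                "role": "system",
                "content": "You are MySpotipal, a friendly Spotify assistant. "
                           "User data: " + json.dumps(spotify_context),
            },
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Who are my top artists?", {"top_artists": ["Radiohead"]})
```

The payload is then POSTed to the chat-completions endpoint and the reply rendered in the chat UI.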

Hosting My Website Through AWS

Architecture:

Architecture diagram generated using Claude.ai/ChatGPT & Mermaid.js

When I first started, the idea of website hosting was completely foreign to me. My previous experience was limited to building a portfolio on Squarespace — hardly comparable to deploying a functional application!

ChatGPT guided me step by step, helping me deploy my app on an EC2 instance with Ubuntu, Gunicorn, and Nginx. I automated deployments and releases with GitHub Actions, making updates seamless.

Deployment challenges taught me valuable lessons. My app’s bugs required extensive refactoring, especially for logging. ChatGPT helped me set up an SQLite-based logging system, replacing cluttered print statements with a structured solution that stored logs for debugging and insights. This cleanup enhanced my code and allowed me to leverage SQL for analysis.
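The core of that logging setup can be captured with Python’s standard library alone. This is a simplified single-threaded sketch (the real system presumably handles concurrency and more fields): a custom `logging.Handler` that writes each record into a SQLite table, so logs can later be queried with plain SQL instead of grepping print output.

```python
import logging
import sqlite3

class SQLiteHandler(logging.Handler):
    """Logging handler that stores records in a SQLite table instead of
    printing them, so they can be analyzed with SQL later."""

    def __init__(self, db_path: str):
        super().__init__()
        # Single-threaded sketch; a production setup would manage
        # connections per thread or serialize writes through a queue.
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS logs ("
            "  created REAL, level TEXT, logger TEXT, message TEXT)"
        )

    def emit(self, record: logging.LogRecord) -> None:
        self.conn.execute(
            "INSERT INTO logs VALUES (?, ?, ?, ?)",
            (record.created, record.levelname, record.name, record.getMessage()),
        )
        self.conn.commit()

logger = logging.getLogger("myspotipal")
logger.setLevel(logging.INFO)
handler = SQLiteHandler(":memory:")  # use a file path for persistent logs
logger.addHandler(handler)
logger.info("Fetched top artists")
```

Once logs live in a table, questions like “which route errors most often?” become one `SELECT ... GROUP BY` away.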

Addressing Technical Hurdles Along the Way

1. Refactoring: Cleaning the Codebase

Early on, I made the BIG mistake of copying and pasting AI-generated code without much thought. This resulted in a messy codebase where helper functions were tangled with clients and API calls, making debugging nearly impossible. When my boyfriend reviewed my codebase, he lovingly called it “spaghetti.”
Lessons Learned: Separation of Concerns
Determined to improve, I restructured the project by introducing modular components. This meant:

  • Splitting responsibilities: I created separate files for Spotify and OpenAI API clients, ensuring each handled only their respective tasks, like authentication and API queries.
  • Organizing helper functions: Reusable utility functions were moved into a dedicated file, making them easy to find and maintain.
  • Using classes: Major components were refactored into classes, grouping related data and methods together, which made the code cleaner and more intuitive.

This modular approach transformed my codebase. It became more readable, scalable, and, most importantly, easier to debug and extend as the project grew.
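The separation described above can be sketched as two independent client classes; the file names and class shapes below are illustrative, not the project’s exact layout. The key property is that each class owns only its own credentials and calls, so a bug in one integration can’t hide in the other.

```python
# Illustrative modular layout:
#   spotify_client.py -> SpotifyClient: Spotify auth state and API queries
#   llm_client.py     -> LLMClient: chat-model calls
#   helpers.py        -> shared utility functions

class SpotifyClient:
    """Owns Spotify auth state and API queries -- nothing else."""

    def __init__(self, access_token: str):
        self.access_token = access_token

    def auth_headers(self) -> dict:
        """Headers attached to every Spotify API request."""
        return {"Authorization": f"Bearer {self.access_token}"}

class LLMClient:
    """Owns chat-model calls; knows nothing about Spotify."""

    def __init__(self, api_key: str):
        self.api_key = api_key

client = SpotifyClient("example-token")
```

A thin orchestration layer (the Flask routes) then composes the two, instead of one tangled module doing both jobs.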

2. Function Calling and Modern Implementations

When querying Spotify data dynamically, I quickly realized that relying on GPT’s memory to handle large datasets was impractical. This limitation led me to explore function calling — a powerful feature that allows a chatbot to trigger specific functions based on user intent.

What is Function Calling?
Function calling enables a chatbot to understand a user’s request and determine which specific function (or piece of code) to execute to provide the right response. Think of it as the chatbot picking the right tool from a toolbox to solve a problem. Function calling also lets LLMs return structured output instead of free-form natural language.
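The mechanics can be shown without any network calls. In this sketch, a tool schema in OpenAI’s `tools` format advertises one function to the model; a registry maps the model’s chosen tool name back to real Python code. The tool name, parameters, and stubbed Spotify data are all illustrative, and the final line simulates the tool call a chat-completions response would contain.

```python
import json

# Tool schema advertised to the model (OpenAI "tools" format);
# the function name and parameters here are illustrative.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_top_artists",
        "description": "Return the user's top Spotify artists",
        "parameters": {
            "type": "object",
            "properties": {"limit": {"type": "integer"}},
            "required": ["limit"],
        },
    },
}]

def get_top_artists(limit: int) -> list[str]:
    # Stub standing in for a real Spotify API call.
    return ["Radiohead", "Björk", "Massive Attack"][:limit]

# Registry mapping tool names the model may pick to actual functions.
REGISTRY = {"get_top_artists": get_top_artists}

def dispatch(tool_call: dict):
    """Route a model-produced tool call to the matching Python function."""
    fn = REGISTRY[tool_call["name"]]
    return fn(**json.loads(tool_call["arguments"]))

# Simulated tool call, shaped like what the chat API returns:
result = dispatch({"name": "get_top_artists", "arguments": '{"limit": 2}'})
```

The function’s return value is then sent back to the model as a `tool` message so it can compose the final natural-language answer.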

Function calling became available in GPT models only in 2023. Because ChatGPT was trained on data from before this feature existed, it couldn’t automatically generate best-practice code for its implementation. Initially, the solutions it provided were overly simplistic and outdated, requiring me to manually classify user queries and build an unnecessarily complex setup with separate prompts for every query type.

After consulting OpenAI’s documentation, I realized I had overcomplicated the implementation. The documentation offered examples of best practices for using function calling more effectively. I used these examples to craft structured prompts and provided them to Claude, which I found had superior coding capabilities compared to ChatGPT.

With Claude’s help, I built a much cleaner and more efficient automatic setup. The new system dynamically interprets user queries, triggers the correct Spotify API functions, and streamlines the entire interaction process. This process didn’t just improve functionality — it made my codebase much, much simpler.

Lessons Learned: Always refer to official documentation to stay updated on best practices.

3. Implementing Observability and Tracing Telemetry

Implementing observability and tracing telemetry is crucial for understanding how your AI models make decisions and for diagnosing unexpected behaviors. To achieve this, I implemented traceability for my AI by integrating OpenLLMetry, an open-source project developed by Traceloop that extends OpenTelemetry to provide comprehensive observability for Large Language Model (LLM) applications.

What is Traceability?
Traceability refers to the ability to track and monitor the flow of data and decisions through a system, allowing you to understand the “why” and “how” behind each output. Specifically, LLM tracing is the practice of tracking and understanding the step-by-step decision-making and thought processes within LLMs as they generate responses. This involves capturing detailed metadata about inputs, outputs, intermediate reasoning steps, and API calls, which is essential for auditing, debugging, and refining AI applications.
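OpenLLMetry wires this up automatically against real LLM calls, but the idea itself fits in a toy decorator. This sketch (standard library only, not the OpenLLMetry API) records the kind of span data a trace captures: the function name, its inputs, its output, and how long it took.

```python
import functools
import time

TRACE_LOG = []  # in-memory span store; real setups export to a tracing backend

def traced(fn):
    """Record inputs, output, and latency per call -- the kind of span data
    an OpenTelemetry-based tool like OpenLLMetry collects automatically."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE_LOG.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def classify_query(text: str) -> str:
    # Stub for an LLM call whose decisions we want to audit later.
    return "playlist_request" if "playlist" in text else "lookup"

classify_query("make me a chill playlist")
```

When a response looks wrong, the trace log shows exactly which step received what and returned what, which is the debugging leverage the next section describes.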

Lessons Learned: Unlike traditional programs, which follow predictable rules, LLMs work differently — they rely on probabilities and patterns from their training data. This makes them unpredictable at times, meaning traditional debugging tools don’t always work. Instead of stepping through code to find an issue, we need new methods like traceability to understand how and why these models make decisions. By tracking their step-by-step reasoning, we can uncover unexpected behaviors and improve their performance.

Using ChatGPT and Claude as Coding Partners

Strengths of Generative AI:

  • ChatGPT: Excels at outlining steps and generating boilerplate code, making it ideal for initiating projects or tasks. It can now also organize chats into projects and reference uploaded files, much like Claude.
  • Claude: Particularly strong at debugging and organizing existing projects by referencing multiple files within a project folder. It can also render HTML files or graphs, making it easier to visualize outputs before making edits.

Limitations of Generative AI:

  • Overengineering: AI often proposes overly complex solutions that require careful review and simplification.
  • Overly Agreeable: AI may affirm flawed suggestions without critical evaluation, making validation essential.
  • Hallucination: When coding, AI can hallucinate by generating outputs that are inaccurate or fabricated. This often includes introducing irrelevant libraries or dependencies, adding unnecessary complexity to your project. Additionally, tools may rename functions or change variable references without notice, leading to subtle but critical errors.
  • Outdated or Low-Quality Coding Practices: AI may produce outdated or low-quality code, especially if it is fed with poorly written examples. Some solutions might also rely on obsolete coding methods, which may not align with modern standards.
  • Conversation Limits: Tools like Claude and advanced OpenAI models (e.g., OpenAI o1) can hit practical conversation limits easily.

Best Practices for Working with AI:

1. Prioritize Simplicity:

  • AI-generated solutions can be overly complex. Question the necessity of every feature or dependency it suggests.
  • Review and refactor AI-generated code to eliminate redundancy and unnecessary boilerplate.

2. Start Fresh When Needed:

  • If responses become inconsistent or unhelpful, start a new chat to reset context and improve output.

3. Own Your Project:

  • AI is only as effective as the person using it. Take ownership of the code’s architecture, ensuring it aligns with your project’s goals and standards.
  • Use AI as a collaborator, not a substitute for thoughtful engineering.

4. Take a Deep Breath:

  • AI can sometimes leap to conclusions without reasoning step by step. When this happens, calmly remind it to take a deep breath and reason carefully. You’d be surprised how much it helps to ask, “Hey AI, slow down and think again.”
  • But as an engineer, you should take a deep breath too. I get it — there are moments when I’m ready to throw my laptop out the window when ChatGPT gives me yet another ridiculous response, but then I remind myself of two things: 1) MacBooks cost a fortune. 2) Frustration is counterproductive; curiosity and patience are far more effective at solving problems.

5. Stay Updated:

  • Technologies like function calling for LLMs evolve rapidly. Regularly update your knowledge by reviewing documentation and staying informed about new best practices.

By embracing these strategies, you can harness the strengths of generative AI while mitigating its limitations, creating a more efficient and empowering coding experience.

Final Thoughts

Building a Spotify AI Assistant with no prior web development experience was an eye-opening journey. Generative AI tools like ChatGPT and Claude served as invaluable coding partners, but this project also showed the importance of critical thinking, organization, and staying updated with best practices.

Generative AI isn’t autopilot — it’s co-pilot. While it can assist and accelerate progress, it requires guidance, validation, and a willingness to learn. Throughout this process, I’ve realized that AI is far from replacing human intelligence. Instead, it’s a powerful tool that, when used thoughtfully, can open doors to possibilities that might otherwise seem out of reach.

Whether you’re a seasoned developer or just starting out, the combination of generative AI and a curious mindset can transform even the most intimidating projects into achievable milestones.

What’s next for MySpotipal? I’m excited to tackle music recommendation generation without relying on Spotify’s deprecated endpoints and to explore more advanced use cases for AI-driven music curation. The journey is far from over, and the possibilities continue to grow.

This project is available on GitHub. If you’re interested, I’d be happy to provide access for you to test the app using your Spotify account. And if you have any thoughts, ideas, or feedback, let’s collaborate — I’d love to hear from you! Connect with me on LinkedIn.
