Nov 24

Building and Tracking AI Agents with LangChain and LangSmith

In this video, you'll learn how to build a powerful AI agent using LangChain and integrate it with tools like Tavily Search and LangSmith for task execution and tracking.

You'll explore how to:

  • Set up your environment for LangChain by creating a virtual environment, installing dependencies, and configuring API keys for Tavily Search and OpenAI.
  • Implement search functionality using Tavily Search, retrieving real-time weather data for Tokyo, and connecting this data to an OpenAI agent powered by the GPT-4o mini model.
  • Create an agent executor with LangChain’s create_react_agent method to enable seamless execution of predefined methods, such as querying the weather and customizing responses, including creative outputs like rhymed weather reports.
  • Integrate LangSmith to trace and analyze agent activities, tracking all method calls, parameters, and outputs for enhanced transparency and debugging.
  • Extend functionality with custom tools, such as a multiplication method, and verify their execution through LangSmith’s detailed tracking interface.
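
For a sense of what this setup looks like in code, here is a minimal Python sketch. It assumes the LangGraph prebuilt create_react_agent, the Tavily tool from langchain_community, and LangSmith tracing switched on via environment variables; the weather query and the multiply tool mirror the examples from the video, everything else is illustrative.

```python
import os

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# LangSmith tracing is enabled purely through environment variables;
# OPENAI_API_KEY, TAVILY_API_KEY, and LANGCHAIN_API_KEY must also be set.
os.environ["LANGCHAIN_TRACING_V2"] = "true"

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

search = TavilySearchResults(max_results=3)            # real-time web search tool
llm = ChatOpenAI(model="gpt-4o-mini")                  # model used in the video
agent = create_react_agent(llm, [search, multiply])    # ReAct-style agent executor

result = agent.invoke(
    {"messages": [("user", "What's the weather in Tokyo right now? Answer as a short rhyme.")]}
)
print(result["messages"][-1].content)
# Every LLM call, tool call, and parameter now shows up as a trace in LangSmith.
```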

By the end, you’ll have the skills to build, customize, and monitor AI agents capable of executing complex tasks with LangChain and LangSmith, giving you both flexibility and full visibility into your agent's operations.

Full Video & Source Code
 

Nov 24

Magentic One: Microsoft’s Revolutionary Multi-Agent AI System

In this video, you'll explore Magentic One, Microsoft’s groundbreaking multi-agent AI system designed to handle complex, multi-step tasks with precision and efficiency. Think of it as a team of AI experts working together seamlessly to simplify your workload.

You'll learn how to:

  • Understand the architecture of Magentic One, starting with the orchestrator that breaks down tasks and assigns them to specialized agents like:
    • FileSurfer: Extracts and processes data from files.
    • Coder: Analyzes, writes, and refines code.
    • Computer Terminal: Executes tasks in a controlled environment.
    • WebSurfer: Interacts with web pages to retrieve and process information.
  • Explore how the orchestrator dynamically manages tasks through iterative updates, monitoring progress with a task ledger and adjusting the plan if the system encounters bottlenecks, ensuring efficient and effective task completion.
  • Set up Magentic One by cloning the repository from GitHub, installing dependencies, and customizing task execution with options like visual tracking via screenshots or human-in-the-loop (HITL) supervision.
  • Test Magentic One’s capabilities with a practical example, where agents collaborate to research AI trends in Germany by analyzing web content, government initiatives, and industry statistics.

By the end, you’ll see why Magentic One is a game-changer in multi-agent AI, enabling dynamic and automated solutions for complex, open-ended problems while giving you the tools to customize and track its progress with precision.

Full Video & Source Code
 

Nov 24

GROK 2: The Power—and Danger—of Uncensored AI

In this video, you'll dive into Grok 2, a highly flexible and boundary-pushing language model known for its relaxed content policies, giving it capabilities beyond the usual limitations of mainstream AI.

You'll learn how to:

  • Analyze Grok 2’s performance across competitive benchmarks, seeing where it excels in scientific QA (GPQA) and code-generation tasks (HumanEval) compared to models like GPT-4 Turbo and Claude 3.5.
  • Set up and experiment with Grok 2’s API by signing up, receiving free testing credits, and configuring integrations in Python for a seamless development experience.
  • Build and deploy an application with Grok 2, creating an interactive chat interface in Streamlit to view its diverse outputs directly in a user-friendly format.
  • Customize Grok 2’s responses to generate content tailored to various applications, from social media posts to more technical interactions.
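
As a rough sketch of the API side, the xAI endpoint is OpenAI-compatible, so the standard OpenAI Python client works with a custom base URL; the model name below ("grok-beta") was the public identifier around the time of the video and may have changed, and the Streamlit chat loop is a simplified stand-in for the interface built in the video.

```python
import os

import streamlit as st
from openai import OpenAI

# xAI exposes an OpenAI-compatible API, so only the base URL and key differ.
client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

st.title("Grok 2 Chat")

if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for msg in st.session_state.history:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask Grok 2 anything"):
    st.session_state.history.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)

    response = client.chat.completions.create(
        model="grok-beta",  # check the xAI console for the current model name
        messages=st.session_state.history,
    )
    answer = response.choices[0].message.content
    st.session_state.history.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```

Run it with streamlit run app.py after exporting XAI_API_KEY.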

Most notably, you’ll see why Grok 2 stands out for its uncensored approach, allowing for outputs that other models might block. This video not only shows how to harness Grok 2's powerful and adaptable features but also underscores the importance of responsible AI use, giving you the tools to create dynamic applications while being mindful of ethical considerations.

Full Video & Source Code
 

Oct 24

Building an OpenAI o1 Clone with Nemotron, RunPod, and Open WebUI

In this video, you'll learn how to build your own OpenAI o1 clone using NVIDIA's Llama 3.1 Nemotron 70B model, Ollama, RunPod, and Open WebUI.

You'll explore how to:

  • Set up Ollama and install the Llama 3.1 Nemotron model locally.
  • Deploy the model on a powerful GPU instance using RunPod to overcome hardware limitations.
  • Configure Ollama for external API access and integrate it with Open WebUI for a ChatGPT-like interface.
  • Enhance the model's reasoning capabilities through prompt engineering that encourages OpenAI o1-style step-by-step reasoning.
  • Ensure data privacy by running advanced AI models on your own infrastructure.
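
To make the prompt-engineering step concrete, here is a minimal sketch using the ollama Python package; the model tag ("nemotron:70b") and the host URL are assumptions you would adjust to whatever you actually pulled and deployed, and the system prompt is just one way to coax o1-style step-by-step reasoning out of the model.

```python
from ollama import Client

# Point at your Ollama server: localhost for a local install,
# or the RunPod proxy URL when the model runs on a rented GPU.
client = Client(host="http://localhost:11434")

# Approximate o1-style behaviour by forcing explicit reasoning before the answer.
REASONING_PROMPT = (
    "You are a careful reasoning assistant. Work through the problem in numbered "
    "steps, double-check each step for mistakes, and only then give the final "
    "answer on its own line starting with 'Answer:'."
)

response = client.chat(
    model="nemotron:70b",  # assumed tag; use the exact variant you pulled
    messages=[
        {"role": "system", "content": REASONING_PROMPT},
        {"role": "user", "content": "How many times does the letter 'r' appear in 'strawberry'?"},
    ],
)
print(response["message"]["content"])
```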

By the end, you'll know how to build and run an OpenAI o1 clone locally or on a rented GPU server, access it through a web interface using Open WebUI, and customize it for optimal performance—all while maintaining full control over your data.

Full Video & Source Code
 

Oct 24

Enhancing OpenAI Swarm Agents with Real Business Data and Email Integration

In this video, you'll learn how to enable your AI agents built with OpenAI's Swarm framework to access real business data from a vector store and send real emails to process bookings.

You'll explore how to:

  • Integrate a vector store (Pinecone) filled with data about an e-bike rental service, including pricing lists, FAQs, and locations.
  • Create an Info Agent that retrieves and utilizes real business data to answer user queries accurately.
  • Implement a booking function that automates sending real emails to customer support using services like Mailtrap or your preferred email provider.
  • Modify agents to access real-time data and perform actions based on user inputs.
  • Develop a Triage Agent to route user requests to the appropriate agent based on the query.
  • Facilitate seamless communication and handoffs between multiple agents.
  • Understand how to enable agents to interact with real business operations by accessing data and sending emails.
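
A condensed sketch of the pattern is shown below; the Pinecone index name, embedding model, Mailtrap credentials, and e-bike details are all placeholders standing in for the real business data used in the video.

```python
import os
import smtplib
from email.message import EmailMessage

from openai import OpenAI
from pinecone import Pinecone
from swarm import Swarm, Agent

openai_client = OpenAI()
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("ebike-rental")  # placeholder index name

def lookup_business_info(question: str) -> str:
    """Fetch the most relevant pricing/FAQ/location snippets from the vector store."""
    vector = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding
    matches = index.query(vector=vector, top_k=3, include_metadata=True).matches
    return "\n".join(m.metadata["text"] for m in matches)

def book_rental(name: str, date: str) -> str:
    """Send a real booking email; Mailtrap's sandbox SMTP is shown, any provider works."""
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = f"Booking: {name} on {date}", "bot@example.com", "support@example.com"
    msg.set_content(f"Please book an e-bike for {name} on {date}.")
    with smtplib.SMTP("sandbox.smtp.mailtrap.io", 2525) as smtp:
        smtp.login(os.environ["MAILTRAP_USER"], os.environ["MAILTRAP_PASS"])
        smtp.send_message(msg)
    return "Booking email sent."

info_agent = Agent(
    name="Info Agent",
    instructions="Answer questions about the e-bike rental using lookup_business_info.",
    functions=[lookup_business_info],
)
booking_agent = Agent(
    name="Booking Agent",
    instructions="Collect a name and date, then call book_rental.",
    functions=[book_rental],
)

def transfer_to_info(): return info_agent
def transfer_to_booking(): return booking_agent

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route pricing/FAQ questions to the Info Agent and bookings to the Booking Agent.",
    functions=[transfer_to_info, transfer_to_booking],
)

client = Swarm()
result = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "How much is a full-day rental in Berlin?"}],
)
print(result.messages[-1]["content"])
```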

By the end, you'll know how to enhance your multi-agent system to access real business data, send real emails, and perform complex tasks, thereby creating a more dynamic and interactive AI solution.

Full Video & Source Code
 

Oct 24

Implementing Multi-Agents with OpenAI Swarm

In this video, you'll learn how to use OpenAI's Swarm framework to build and orchestrate multiple AI agents for complex tasks.

You'll explore how to:

  • Set up a virtual environment and install the Swarm library directly from GitHub.
  • Create agents with specific instructions and functions using Swarm and OpenAI's GPT models.
  • Implement function calling to enable agents to execute custom functions.
  • Facilitate communication and handoffs between multiple agents.
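
A minimal, self-contained sketch in the spirit of Swarm's README is shown below; the agent names and the dummy weather function are illustrative.

```python
# Swarm is installed straight from GitHub (it is not published on PyPI):
#   pip install git+https://github.com/openai/swarm.git
from swarm import Swarm, Agent

def get_weather(city: str) -> str:
    """Dummy tool function; swap in a real API call as needed."""
    return f"It is sunny and 22°C in {city}."

weather_agent = Agent(
    name="Weather Agent",
    instructions="Answer weather questions by calling get_weather.",
    functions=[get_weather],
)

def transfer_to_weather_agent():
    """Handoff: a function that returns another Agent passes the conversation to it."""
    return weather_agent

triage_agent = Agent(
    name="Triage Agent",
    instructions="Hand any weather-related question to the Weather Agent.",
    functions=[transfer_to_weather_agent],
)

client = Swarm()  # uses OPENAI_API_KEY and OpenAI's GPT models under the hood
response = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "What's the weather in Hamburg?"}],
)
print(response.messages[-1]["content"])
```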

By the end, you'll know how to build and interact with AI agents using Swarm, implement function calling, manage multi-agent communication, and enhance your applications with scalable, customizable agent coordination.

Full Video & Source Code
 

Oct 24

Enhancing a Chatbot with Chat History Using HTTP Streaming

In this video, you'll learn how to enhance your retrieval-augmented chatbot by adding chat history functionality using LangChain, Flask, and HTTP streaming.

You'll explore how to:

  • Implement server-side chat history to handle follow-up questions and maintain conversation context.
  • Generate unique user IDs and manage user sessions using local storage.
  • Modify the backend to store and retrieve chat history for each user.
  • Adjust prompts and integrate chat history into the LangChain stream response method.
  • Understand HTTP streaming and how the index.html frontend (JavaScript) communicates with the Python Flask server.
  • Update the frontend to include user IDs in requests and ensure seamless interaction with the backend.
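
The backend change boils down to something like the sketch below: an in-memory history keyed by the user ID from local storage, and a Flask route that streams the LangChain response chunk by chunk. The route name, model, and in-memory store are assumptions; the video's exact implementation may differ.

```python
from flask import Flask, Response, request
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

app = Flask(__name__)
llm = ChatOpenAI(model="gpt-4o-mini")

# Server-side chat history, keyed by the user ID the frontend keeps in localStorage.
histories: dict[str, list] = {}

@app.post("/chat")
def chat():
    data = request.get_json()
    user_id, question = data["user_id"], data["message"]
    history = histories.setdefault(user_id, [SystemMessage("You are a helpful assistant.")])
    history.append(HumanMessage(question))

    def generate():
        answer = ""
        # llm.stream() yields tokens as they arrive; Flask forwards them as an HTTP stream,
        # which the index.html JavaScript reads incrementally from the response body.
        for chunk in llm.stream(history):
            answer += chunk.content
            yield chunk.content
        history.append(AIMessage(answer))

    return Response(generate(), mimetype="text/plain")

if __name__ == "__main__":
    app.run(debug=True)
```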

By the end, you'll know how to build a chatbot that can handle multi-turn conversations by maintaining chat history, understand HTTP streaming, and enable seamless communication between your frontend and backend, enhancing the user experience with context-aware responses.

Full Video & Source Code
 

Oct 24

Building a Retrieval-Augmented Chatbot with LangChain, OpenAI, and Pinecone

In this video, you'll learn how to build a retrieval-augmented chatbot using LangChain, OpenAI's GPT models, and Pinecone for vector storage.

You'll explore how to:

  • Set up an OpenAI client to access language models.
  • Create prompts and utilize LangChain's Stuff Document Chain.
  • Replace hard-coded documents with dynamic retrieval from a Pinecone vector store.
  • Implement a retrieval chain to fetch relevant documents based on user queries.
  • Understand HTTP streaming and how the index.html frontend (JavaScript) communicates with the Python Flask server.
  • Develop a Flask server and build a user-friendly web interface to interact with the chatbot in real-time.
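
The core of that pipeline looks roughly like the sketch below; the index name, embedding model, and prompt wording are placeholders, and the Flask/streaming layer from the video is left out for brevity.

```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

llm = ChatOpenAI(model="gpt-4o-mini")
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Connect to an existing Pinecone index (PINECONE_API_KEY must be set).
vectorstore = PineconeVectorStore(index_name="chatbot-docs", embedding=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using only the following context:\n\n{context}"),
    ("human", "{input}"),
])

# The "stuff documents" chain inserts the retrieved documents into {context};
# the retrieval chain fetches those documents based on the user's input first.
combine_docs_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)

result = retrieval_chain.invoke({"input": "What are your opening hours?"})
print(result["answer"])
```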

By the end, you'll know how to set up and deploy a fully functional chatbot that answers user questions by retrieving and utilizing relevant documents from a vector store, understand HTTP streaming, and enable seamless communication between your frontend and backend for an interactive user experience.

Full Video & Source Code
 

Oct 24

Run LLAMA 3.2 Models Locally with Ollama and Open WebUI

In this video, you'll learn how to run LLAMA 3.2 models locally on your computer using Ollama and Open WebUI.

You'll explore how to:

  • Download and set up Ollama to run LLAMA models locally via the command line.
  • Interact with LLAMA 3.1 and 3.2 models using the command-line interface.
  • Install Open WebUI with Docker to provide a ChatGPT-like interface for local models.
  • Use Open WebUI to interact with LLAMA 3.2 models through a user-friendly interface.
  • Compare the performance of LLAMA models on tasks like joke telling, article writing, code generation, and text summarization.
  • Understand the limitations of smaller models for tasks like summarizing large texts.
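
The video works through the command line and Open WebUI, but the same local models are also reachable from Python via the ollama package, which is a handy follow-up once everything is installed; the model tag below assumes you pulled the default 3B variant.

```python
# Prerequisite in a terminal: ollama pull llama3.2   (3B by default; llama3.2:1b is the smaller variant)
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Summarize the benefits of running LLMs locally in two sentences."}],
)
print(response["message"]["content"])

# Streaming keeps longer generations responsive:
stream = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Tell me a short joke."}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```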

By the end, you'll know how to set up and run LLAMA models locally using Ollama and Open WebUI, and understand their capabilities and limitations for various tasks.

Full Video & Source Code
 

Sep 24

Discover LLAMA 3.2 Vision Models and Build an AI-Powered Smile Analyzer App

In this video, you'll gain a comprehensive understanding of the groundbreaking LLAMA 3.2 models released by Meta, including their advanced vision capabilities and how to access them. You'll also build a full-stack app that analyzes smiles for signs of deception, leveraging these new models.

You'll explore how to:

  • Understand the different LLAMA 3.2 models available, including both large vision-capable models and smaller models designed for on-device use.
  • Access LLAMA 3.2 models through platforms like Hugging Face, Groq, and Together AI, overcoming regional restrictions.
  • Test the vision capabilities of LLAMA 3.2 by analyzing images, such as restaurant bills to find savings and challenging "Where's Waldo" puzzles.
  • Use v0 to create an MVP of a smile analyzer app with Next.js, enabling users to upload photos for analysis.
  • Integrate LLAMA 3.2 into your app via Together AI, replacing placeholder logic with actual AI-driven analysis.
  • Utilize Cursor to assist in coding, handling API calls, and adjusting code based on error messages.
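
The Together AI call at the heart of the analysis looks roughly like this; the model ID is the Llama 3.2 vision variant listed on Together at the time and may need updating, the prompt is illustrative, and the actual app wraps this call in a Next.js API route rather than a Python script.

```python
import base64
import os

from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

def analyze_smile(image_path: str) -> str:
    """Send a photo to a Llama 3.2 vision model and ask whether the smile looks genuine."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo",  # assumed model ID
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Does the smile in this photo look genuine or forced? Explain briefly."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(analyze_smile("smile.jpg"))
```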

By the end, you'll have a functional web app that analyzes smiles to detect deception, and you'll have learned how to leverage LLAMA 3.2's vision capabilities in your own projects using v0, Together AI, and Cursor.

Watch For Free
 

Sep 24

Build an AI-Powered Background Remover with Replit & Replicate

In this video, you'll learn how to use Replit to quickly build a web app that removes image backgrounds using the Replicate API. 

Please note: This lesson is exclusive to members and won't be available elsewhere.

You'll explore how to:

  • Set up a web app in Replit using the Replicate API to remove image backgrounds.
  • Use Replicate’s background removal model.
  • Troubleshoot issues and adjust code based on error messages.
  • Improve functionality by providing feedback to Replit and making manual adjustments with Python and Flask.
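
Stripped down, the Flask backend you end up with looks something like the sketch below; the model identifier is a placeholder (copy the exact "owner/model:version" string from a background-removal model page on replicate.com), and REPLICATE_API_TOKEN must be set in the Replit secrets.

```python
import replicate
from flask import Flask, request

app = Flask(__name__)

# Placeholder: use the exact "owner/model:version" string of a background-removal
# model from replicate.com.
BACKGROUND_REMOVAL_MODEL = "owner/background-removal-model:version"

@app.post("/remove-background")
def remove_background():
    """Accept an uploaded image and return the URL of the background-free result."""
    image = request.files["image"]  # the Replicate client uploads file objects directly
    output = replicate.run(BACKGROUND_REMOVAL_MODEL, input={"image": image})
    return {"result_url": str(output)}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```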

By the end, you'll have a functional web app that removes image backgrounds, and you'll have learned how to resolve common coding problems along the way.

Full Video & Source Code
 

Sep 24

Secret Hacks for Mastering Cursor AI

In this video, you'll discover three little-known tricks that will transform how you use Cursor AI. While many developers dive in without structure, these secret tips will ensure you're getting the most out of Cursor, allowing you to build smarter, faster, and more efficiently.

You'll learn how to:

  • Plan strategically before even opening Cursor, using secret workflow techniques in tools like v0 for better results
  • Implement a hidden "Cursor Rules" file to secretly dictate Cursor’s behavior, ensuring expert-level code generation every time
  • Seamlessly swap placeholder content for real data with a trick that targets specific files, creating professional apps with minimal effort

By the end, you’ll be leveraging these insider techniques to elevate your Cursor workflow to a whole new level of performance.

Watch For Free
 

Sep 24

Building an AI Image Generation Web App with Flux and Cursor

In this video, you'll learn how to create a stunning image generation web app using Flux models and the Replicate API, with a strong focus on leveraging Cursor for faster development. You'll start by designing a user-friendly interface with v0, and then use Cursor to automate coding tasks, from setting up the Next.js project to integrating API calls for image generation.

You'll explore how to:

  • Design a clean UI for image generation using v0
  • Use Cursor to automate code generation for API integration and server-side components
  • Build and deploy the project locally with Next.js using Cursor's guidance
  • Efficiently connect the Flux model through Replicate for fast image generation
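
The Replicate call itself is only a few lines; the video wires it into a Next.js API route, but the equivalent Python sketch below shows the shape of the request, using the fast flux-schnell variant and an illustrative prompt.

```python
import os

import replicate

# Requires REPLICATE_API_TOKEN in the environment.
client = replicate.Client(api_token=os.environ["REPLICATE_API_TOKEN"])

output = client.run(
    "black-forest-labs/flux-schnell",  # fast variant; flux-dev / flux-pro trade speed for quality
    input={
        "prompt": "A cozy cabin in a snowy forest at dusk, cinematic lighting",
        "aspect_ratio": "16:9",
    },
)

# flux-schnell returns a list of generated images; print their URLs.
for image in output:
    print(image)
```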

By the end, you'll have a powerful web app built with the help of Cursor, speeding up your workflow and delivering impressive results.

Watch For Free
 

Sep 24

Building a Freelance Project Scraper with Spider Cloud, OpenAI, and Cursor

In this video, you'll learn how to build a freelance job scraper that pulls data from sites like WeWorkRemotely, using Spider Cloud for scraping and OpenAI's Structured Output feature to organize the results. 

Please note: This lesson is exclusive to members and won't be available elsewhere.

You'll explore how to:

  • Set up Spider Cloud for web scraping and OpenAI for structured data output
  • Automate the extraction and organization of job listings based on custom criteria
  • Use Cursor to enhance code creation and solve errors efficiently
  • Build a Streamlit UI to display and manage the scraped projects
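
The Structured Output half of that pipeline is sketched below with a Pydantic schema and OpenAI's parse helper; the JobListing fields are illustrative, and fetch_listings_markdown is a hypothetical stand-in for the Spider Cloud scrape of WeWorkRemotely.

```python
from openai import OpenAI
from pydantic import BaseModel

class JobListing(BaseModel):
    title: str
    company: str
    tech_stack: list[str]
    url: str

class JobListings(BaseModel):
    jobs: list[JobListing]

client = OpenAI()

def extract_jobs(scraped_markdown: str) -> JobListings:
    """Turn raw scraped page content into typed job listings via Structured Outputs."""
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Extract remote, freelance-friendly job listings from the scraped page."},
            {"role": "user", "content": scraped_markdown},
        ],
        response_format=JobListings,
    )
    return completion.choices[0].message.parsed

# page = fetch_listings_markdown("https://weworkremotely.com/")  # hypothetical Spider Cloud helper
# print(extract_jobs(page).jobs)
```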

By the end, you'll have a functional job scraper built with the support of Cursor, streamlining both development and data management.

Full Video & Source Code
 

Aug 24

Building a Chat Agent for an E-Bike Rental Business with N8N and Pinecone

In this video, you'll learn how to create a customer-facing AI chat agent for an e-bike rental business using N8N, Pinecone, and OpenAI. The AI agent is designed to answer customer questions, access relevant data from files, and provide accurate responses. Cursor is used to speed up code generation and debugging throughout the development process.

You'll explore how to:

  • Set up N8N to automate workflows and connect to Google Drive for file access
  • Build a workflow that processes client-provided data and stores it in Pinecone
  • Create a chat agent with short-term memory that accesses a vector store for real-time customer support
  • Integrate the chat agent into a website using a provided endpoint and custom code

By the end, you'll have a functional AI chat agent capable of handling customer queries, built efficiently with the help of Cursor.

Watch For Free
 

Aug 24

Accepting Payments with Stripe in an AI App

In this video, you'll learn how to integrate Stripe payments into a Next.js application, enabling you to monetize AI solutions built with OpenAI, Claude 3.5, or Autonomous Agents. 

Please note: This lesson is exclusive to members and won't be available elsewhere.

You'll explore how to:

  • Set up a Next.js app with Stripe integration
  • Create an API endpoint to generate a payment intent and handle client secrets
  • Build a Stripe checkout form to securely process payments
  • Redirect users to a confirmation page upon successful payment
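
The video implements this as a Next.js API route; purely as a sketch of the same server-side step, here is the payment-intent endpoint written with Stripe's Python library and Flask, with the amount and keys as placeholders.

```python
import os

import stripe
from flask import Flask, jsonify, request

stripe.api_key = os.environ["STRIPE_SECRET_KEY"]
app = Flask(__name__)

@app.post("/create-payment-intent")
def create_payment_intent():
    """Create a PaymentIntent and hand its client secret to the checkout form."""
    data = request.get_json()
    intent = stripe.PaymentIntent.create(
        amount=data.get("amount", 1999),  # amount in cents, e.g. $19.99
        currency="usd",
        automatic_payment_methods={"enabled": True},
    )
    # The frontend passes this client secret to Stripe.js / the Payment Element,
    # which confirms the payment and then redirects to the confirmation page.
    return jsonify(clientSecret=intent.client_secret)

if __name__ == "__main__":
    app.run(port=4242)
```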

By the end, you'll have a fully functional payment system integrated into your AI-powered web application.

Full Video & Source Code
 

Aug 24

Introducing Agent Zero: Automating Tasks with AI Agents

In this video, you'll learn how to use Agent Zero, an open-source framework powered by the latest language models like GPT-4o mini, to automate tasks, fetch real-time information, and even create games from scratch.

Please note: This lesson is exclusive to members and won't be available elsewhere.

You'll explore how to:

  • Set up Agent Zero by cloning the repository and installing dependencies
  • Use OpenAI and Perplexity APIs to power Agent Zero's functionality
  • Automate tasks such as fetching weather data or building a simple game
  • Customize Agent Zero’s language models and prompts to improve performance

By the end, you'll have a solid understanding of how to use Agent Zero to automate various tasks with minimal setup.

Full Video & Source Code