
Let’s Build a Bank Account Simulation

Problem Statement

🧩 Overview

Build a bank system to create and manage user accounts, simulate deposits, withdrawals, interest accrual, and overdraft penalties.

🎯 Goals

  • Support multiple account types with different rules
  • Simulate real-world banking logic like minimum balance and interest
  • Track user actions securely

πŸ— Suggested Classes

  • BankAccount: account_number, owner, balance, deposit(), withdraw()
  • SavingsAccount(BankAccount): interest_rate, apply_interest()
  • CheckingAccount(BankAccount): minimum_balance, penalty
  • User: name, password, accounts[]

📦 Features

  • Create new accounts (checking/savings)
  • Deposit money to account
  • Withdraw money (with rules):
    • Checking: maintain minimum balance or pay penalty
    • Savings: limit to 3 withdrawals/month
  • Apply interest monthly for savings
  • Show account summary

🔧 OOP Concepts

  • Inheritance: SavingsAccount and CheckingAccount from BankAccount
  • Encapsulation: Balance, account actions hidden inside methods
  • Polymorphism: Overridden withdraw() method in each subclass
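
To make these concepts concrete, here is a minimal sketch of how the suggested classes could fit together. The class and method names follow the suggestions above; the interest rate, minimum balance, penalty, and withdrawal-limit values are illustrative assumptions:

class BankAccount:
    def __init__(self, account_number, owner, balance=0.0):
        self.account_number = account_number
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("Insufficient funds")
        self.balance -= amount

class SavingsAccount(BankAccount):
    MAX_WITHDRAWALS_PER_MONTH = 3  # rule from the features above

    def __init__(self, account_number, owner, balance=0.0, interest_rate=0.04):
        super().__init__(account_number, owner, balance)
        self.interest_rate = interest_rate
        self.withdrawals_this_month = 0

    def apply_interest(self):
        self.balance += self.balance * self.interest_rate / 12  # monthly accrual

    def withdraw(self, amount):  # polymorphic override
        if self.withdrawals_this_month >= self.MAX_WITHDRAWALS_PER_MONTH:
            raise ValueError("Monthly withdrawal limit reached")
        super().withdraw(amount)
        self.withdrawals_this_month += 1

class CheckingAccount(BankAccount):
    def __init__(self, account_number, owner, balance=0.0,
                 minimum_balance=500.0, penalty=35.0):  # assumed amounts
        super().__init__(account_number, owner, balance)
        self.minimum_balance = minimum_balance
        self.penalty = penalty

    def withdraw(self, amount):  # polymorphic override
        super().withdraw(amount)
        if self.balance < self.minimum_balance:
            self.balance -= self.penalty  # fee for dipping below the minimum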

🔌 Optional Extensions

  • Password protection (simple CLI input masking)
  • Transaction history with timestamp
  • Monthly bank statement generation

Let’s Build a Library Management System With OOPS

Problem Statement

🧩 Overview

Design a command-line-based Library Management System that simulates the basic operations of a library for both users and administrators. It should manage books, user accounts, borrowing/returning of books, and enforce library rules like book limits per member.

🎯 Goals

  • Allow members to search, borrow, and return books.
  • Allow admins to manage the library’s inventory.
  • Track book availability.
  • Prevent double borrowing of a book.

👀 Actors

  • Admin
  • Member

πŸ— Suggested Classes

  • Book: ID, title, author, genre, is_available
  • User: username, role, user_id
  • Member(User): borrowed_books (max 3 at a time)
  • Admin(User): can add/remove books
  • Library: manages collections of books and users

📦 Features

  • Admin:
    • Add a book with metadata
    • Remove a book by ID or title
    • List all books
  • Member:
    • Register or login
    • View available books
    • Borrow a book (limit 3)
    • Return a book
  • Library:
    • Handles storage, availability, and user-book mappings

🔧 OOP Concepts

  • Inheritance: Admin and Member inherit from User
  • Encapsulation: Book’s availability status and member’s borrow list
  • Polymorphism: Different view_dashboard() method for Admin vs Member
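
A minimal sketch of the inheritance and polymorphism described above (the names follow the suggested classes; the dashboard text is illustrative):

class User:
    def __init__(self, user_id, username, role):
        self.user_id = user_id
        self.username = username
        self.role = role

    def view_dashboard(self):
        print(f"Welcome, {self.username}")

class Member(User):
    MAX_BORROWED = 3  # borrowing limit from the rules above

    def __init__(self, user_id, username):
        super().__init__(user_id, username, role="member")
        self.borrowed_books = []

    def view_dashboard(self):  # polymorphic override
        print(f"Member {self.username}: "
              f"{len(self.borrowed_books)}/{self.MAX_BORROWED} books borrowed")

class Admin(User):
    def __init__(self, user_id, username):
        super().__init__(user_id, username, role="admin")

    def view_dashboard(self):  # polymorphic override
        print(f"Admin {self.username}: add or remove books, list inventory")

for user in (Member(1, "asha"), Admin(2, "ravi")):
    user.view_dashboard()  # same call, different behavior per role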

🔌 Optional Extensions

  • Track borrowing history (borrow date, return date)
  • Due dates and overdue penalties
  • Persistent data storage (JSON or SQLite)

Redis Strings – The Building Blocks of Key Value Storage

Redis is famously known as an in-memory data structure store, often used as a database, cache, and message broker. The simplest and most fundamental data type in Redis is the string. This blog walks through everything you need to know about Redis strings with practical examples.

What Are Redis Strings?

In Redis, a string is a binary-safe sequence of bytes. That means it can contain any kind of data: text, integers, or even serialized objects.

  • Maximum size: 512 MB
  • Default behavior: key-value pair storage

Common String Commands

Let’s explore key-value operations you can perform on Redis strings using the redis-cli.

1. SET – Assign a Value to a Key

SET user:1:name "Alice"

This sets the key user:1:name to the value "Alice".

2. GET – Retrieve a Value by Key

GET user:1:name
# Output: "Alice"

3. EXISTS – Check if a Key Exists

EXISTS user:1:name
# Output: 1 (true)

4. DEL – Delete a Key

DEL user:1:name

5. SETEX – Set Value with Expiry (TTL)

SETEX session:12345 60 "token_xyz"

This sets session:12345 with value token_xyz that expires in 60 seconds.

6. INCR / DECR – Numeric Operations

SET views:homepage 0
INCR views:homepage
INCR views:homepage
DECR views:homepage
GET views:homepage
# Output: "1"

7. APPEND – Append to Existing String

SET greet "Hello"
APPEND greet ", World!"
GET greet
# Output: "Hello, World!"

8. MSET / MGET – Set or Get Multiple Keys at Once

MSET product:1 "Book" product:2 "Pen"
MGET product:1 product:2
# Output: "Book" "Pen"

Gotchas to Watch Out For

  1. String size limit: 512 MB per key.
  2. Atomic operations: INCR, DECR are atomic – ideal for counters.
  3. Expire keys: Always use TTL for session-like data to avoid memory bloat.
  4. Binary safety: Strings can hold any binary data, including serialized objects.

Use Redis with Python

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.set('user:1:name', 'Alice')
print(r.get('user:1:name').decode())
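
The same commands map one-to-one onto redis-py methods. As a quick sketch, here is how the SETEX and INCR examples above look from Python, reusing the client r from the previous snippet:

r.setex('session:12345', 60, 'token_xyz')  # value with a 60-second TTL
print(r.ttl('session:12345'))              # seconds remaining before expiry

r.set('views:homepage', 0)
r.incr('views:homepage')                   # atomic increment, safe for counters
print(r.get('views:homepage').decode())    # "1"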

Code Less, Prompt Better: Unlocking Python's Built-in LLM Enhancers

In the rapidly evolving landscape of Large Language Models (LLMs), effective prompt engineering has become a crucial skill. While much attention is given to the art of crafting effective prompts, less focus has been placed on how to efficiently manage these prompts programmatically. Python, with its rich set of built-in features, offers powerful tools to dynamically construct, optimize, and manage LLM prompts.
This article explores how Python's built-in features can transform your approach to LLM prompt engineering, making your code more efficient, maintainable, and powerful.

1. Using locals() for Dynamic Context Injection

The Problem
When working with LLMs, we often need to inject contextual information into our prompts. The traditional approach involves manual string formatting:

def generate_response(user_name, user_query, previous_context):
    prompt = f"""
    User name: {user_name}
    User query: {user_query}
    Previous context: {previous_context}

    Please respond to the user's query considering the context above.
    """

    return call_llm_api(prompt)

This works well for simple cases, but becomes unwieldy as the number of variables increases. It's also error-prone – you might forget to include a variable or update a variable name.

The Solution with locals()
Python's locals() function returns a dictionary containing all local variables in the current scope. We can leverage this to automatically include all relevant context:

def generate_response(user_name, user_query, previous_context, user_preferences=None, user_history=None):
    # All local variables are now accessible
    context_dict = locals()

    # Build a dynamic prompt section with all available context
    context_sections = []
    for key, value in context_dict.items():
        if value is not None:  # Only include non-None values
            context_sections.append(f"{key}: {value}")

    context_text = "\n".join(context_sections)

    prompt = f"""
    Context information:
    {context_text}

    Please respond to the user's query considering the context above.
    """

    return call_llm_api(prompt)

Benefits:

Automatic variable inclusion: If you add a new parameter to your function, it's automatically included in the context.
Reduced errors: No need to manually update string formatting when variables change.
Cleaner code: Separates the mechanism of context injection from the specific variables.

2. Using inspect for Function Documentation

The Problem
When creating LLM prompts that involve function execution or code generation, providing accurate function documentation is crucial:

def create_function_prompt(func_name, params):
    prompt = f"""
    Create a Python function named '{func_name}' with the following parameters:
    {params}
    """
    return prompt

This approach requires manually specifying function details, which can be tedious and error-prone.

The Solution with inspect
Python's inspect module allows us to extract rich metadata from functions:

import inspect

def create_function_prompt(func_reference):
    # Get the function signature
    signature = inspect.signature(func_reference)

    # Get the function docstring
    doc = inspect.getdoc(func_reference) or "No documentation available"

    # Get source code if available
    try:
        source = inspect.getsource(func_reference)
    except (OSError, TypeError):
        source = "Source code not available"

    prompt = f"""
    Function name: {func_reference.__name__}

    Signature: {signature}

    Documentation:
    {doc}

    Original source code:
    {source}

    Please create an improved version of this function.
    """

    return prompt

# Example usage
def example_func(a, b=10):
    """This function adds two numbers together."""
    return a + b

improved_function_prompt = create_function_prompt(example_func)
# Send to LLM for improvement

This dynamically extracts all relevant information about the function, making the prompt much more informative.

3. Context Management with Class Attributes

The Problem
Managing conversation history and context with LLMs often leads to repetitive code:

conversation_history = []

def chat_with_llm(user_input):
    # Manually build the prompt with history
    prompt = "Previous conversation:\n"
    for entry in conversation_history:
        prompt += f"{entry['role']}: {entry['content']}\n"

    prompt += f"User: {user_input}\n"
    prompt += "Assistant: "

    response = call_llm_api(prompt)

    # Update history
    conversation_history.append({"role": "User", "content": user_input})
    conversation_history.append({"role": "Assistant", "content": response})

    return response

The Solution with Class Attributes and __dict__
We can create a conversation manager class that uses Python's object attributes:

class ConversationManager:
    def __init__(self, system_prompt=None, max_history=10):
        self.history = []
        self.system_prompt = system_prompt
        self.max_history = max_history
        self.user_info = {}
        self.conversation_attributes = {
            "tone": "helpful",
            "style": "concise",
            "knowledge_level": "expert"
        }

    def add_user_info(self, **kwargs):
        """Add user-specific information to the conversation context."""
        self.user_info.update(kwargs)

    def set_attribute(self, key, value):
        """Set a conversation attribute."""
        self.conversation_attributes[key] = value

    def build_prompt(self, user_input):
        """Build a complete prompt using object attributes."""
        prompt_parts = []

        # Add system prompt if available
        if self.system_prompt:
            prompt_parts.append(f"System: {self.system_prompt}")

        # Add conversation attributes
        prompt_parts.append("Conversation attributes:")
        for key, value in self.conversation_attributes.items():
            prompt_parts.append(f"- {key}: {value}")

        # Add user info if available
        if self.user_info:
            prompt_parts.append("\nUser information:")
            for key, value in self.user_info.items():
                prompt_parts.append(f"- {key}: {value}")

        # Add conversation history
        if self.history:
            prompt_parts.append("\nConversation history:")
            for entry in self.history[-self.max_history:]:
                prompt_parts.append(f"{entry['role']}: {entry['content']}")

        # Add current user input
        prompt_parts.append(f"\nUser: {user_input}")
        prompt_parts.append("Assistant:")

        return "\n".join(prompt_parts)

    def chat(self, user_input):
        """Process a user message and get response from LLM."""
        prompt = self.build_prompt(user_input)

        response = call_llm_api(prompt)

        # Update history
        self.history.append({"role": "User", "content": user_input})
        self.history.append({"role": "Assistant", "content": response})

        return response

    def get_state_as_dict(self):
        """Return a dictionary of the conversation state using __dict__."""
        return self.__dict__

    def save_state(self, filename):
        """Save the conversation state to a file."""
        import json
        with open(filename, 'w') as f:
            json.dump(self.get_state_as_dict(), f)

    def load_state(self, filename):
        """Load the conversation state from a file."""
        import json
        with open(filename, 'r') as f:
            state = json.load(f)
            self.__dict__.update(state)



Using this approach:

# Create a conversation manager
convo = ConversationManager(system_prompt="You are a helpful assistant.")

# Add user information
convo.add_user_info(name="John", expertise="beginner", interests=["Python", "AI"])

# Set conversation attributes
convo.set_attribute("tone", "friendly")

# Chat with the LLM
response = convo.chat("Can you help me understand how Python dictionaries work?")
print(response)

# Later, save the conversation state
convo.save_state("conversation_backup.json")

# And load it back
new_convo = ConversationManager()
new_convo.load_state("conversation_backup.json")

4. Using dir() for Object Exploration

The Problem
When working with complex objects or APIs, it can be challenging to know what data is available to include in prompts:



def generate_data_analysis_prompt(dataset):
    # Manually specifying what we think is available
    prompt = f"""
    Dataset name: {dataset.name}
    Number of rows: {len(dataset)}

    Please analyze this dataset.
    """
    return prompt

The Solution with dir()
Python's dir() function lets us dynamically discover object attributes and methods:


def generate_data_analysis_prompt(dataset):
    # Discover available attributes
    attributes = dir(dataset)

    # Filter out private attributes (those starting with _)
    public_attrs = [attr for attr in attributes if not attr.startswith('_')]

    # Build metadata section
    metadata = []
    for attr in public_attrs:
        try:
            value = getattr(dataset, attr)
            # Only include non-method attributes with simple values
            if not callable(value) and not hasattr(value, '__dict__'):
                metadata.append(f"{attr}: {value}")
        except Exception:
            pass  # Skip attributes that can't be accessed

    metadata_text = "\n".join(metadata)

    prompt = f"""
    Dataset metadata:
    {metadata_text}

    Please analyze this dataset based on the metadata above.
    """

    return prompt


This approach automatically discovers and includes relevant metadata without requiring us to know the exact structure of the dataset object in advance.

5. String Manipulation for Prompt Cleaning

The Problem
User inputs and other text data often contain formatting issues that can affect LLM performance:



def process_document(document_text):
    prompt = f"""
    Document:
    {document_text}

    Please summarize the key points from this document.
    """
    return call_llm_api(prompt)


The Solution with String Methods
Python's rich set of string manipulation methods can clean and normalize text:



def process_document(document_text):
    # Remove excessive whitespace
    cleaned_text = ' '.join(document_text.split())

    # Normalize line breaks
    cleaned_text = cleaned_text.replace('\r\n', '\n').replace('\r', '\n')

    # Limit length (many LLMs have token limits)
    max_chars = 5000
    if len(cleaned_text) > max_chars:
        cleaned_text = cleaned_text[:max_chars] + "... [truncated]"

    # Replace problematic characters
    for char, replacement in [('\u2018', "'"), ('\u2019', "'"), ('\u201c', '"'), ('\u201d', '"')]:
        cleaned_text = cleaned_text.replace(char, replacement)

    prompt = f"""
    Document:
    {cleaned_text}

    Please summarize the key points from this document.
    """

    return call_llm_api(prompt)


Conclusion

Python's built-in features offer powerful capabilities for enhancing LLM prompts:

Dynamic Context: Using locals() and __dict__ to automatically include relevant variables
Introspection: Using inspect and dir() to extract rich metadata from objects and functions
String Manipulation: Using Python's string methods to clean and normalize text

By leveraging these built-in features, you can create more robust, maintainable, and dynamic LLM interactions. The techniques in this article can help you move beyond static prompt templates to create truly adaptive and context-aware LLM applications.
Most importantly, these approaches scale well as your LLM applications become more complex, allowing you to maintain clean, readable code while supporting sophisticated prompt engineering techniques.
Whether you're building a simple chatbot or a complex AI assistant, Python's built-in features can help you create more effective LLM interactions with less code and fewer errors.

TamilKavi: Release of Python Package & Dataset

Hi guys 👋

Today, I want to share something unexpected. To be honest, if someone had told me a month ago that I could do this, I wouldn’t have believed them. But here we are: I’ve finally released a Python package and dataset called TamilKavi. I still can’t believe I pulled it off, but it’s real!

I’d love to share the whole story with you. Many of you already know me: I write Tamil poetry and have even published two books. However, I faced font issues when trying to release them on Amazon and Kindle. Frustrated, I reached out to my community friend Hari and asked him:
“Bro, I want to release my Tamil poetry book on Amazon, but I’m stuck with font issues. Do you know anyone who can solve it?”

Hari referred me to Ayyanar Bro, and to my surprise, he was from Madurai. What a coincidence! We spoke almost four times a week for different reasons. I had already written about him and his portfolio website, which he built using Emacs & Org, so I won’t go into more details; you might find it repetitive.

Through Ayyanar Bro, I learned about the Tamil KanchiLUG community and FreeTamilBooks, where I finally found a solution to my font issue. But here’s another twist: FreeTamilBooks required more poetry for my book release, since I wanted to release one book there and another on Amazon. That was another headache because, with my tight schedule, I barely had time to write.

While navigating all this, I discovered TamilRulePy, a Python package with Tamil grammar rules. I was eager to learn more, and unexpectedly, I got an opportunity to contribute to it! That’s when I met Boopalan, another passionate tech enthusiast like me. He helped me write code for TamilRulePy and even invited me to contribute to TamilString, a Python package for documentation. I accepted his invitation and started working on it.

Then, during one of our conversations, I got an idea: why not develop my own Python package? And that’s how TamilKavi was born.

I shared my idea with Boopalan and invited him to build it as a team because, honestly, I’m no expert. It wasn’t easy; we had to overcome countless challenges, especially since we were both preparing for our model exams and semester exams (he’s an MSc student, and I’m a BSc student). It was a tough time, but I didn’t give up. I studied, understood, and gradually started coding, though not entirely on my own, of course.

Now, you might wonder: why build a website? Simple: to collect data from authors. But due to financial constraints, the website’s data-collection idea turned into a Google Form, which is now just a navigation button on the site. It’s another story altogether. Since I had no time, I built a basic structure using Lovable.dev and handed it over to my juniors, Gagan & Rohith, who took care of the website.

The final result? Release of the Python package & website!

I must especially thank Praveen Bro, my community brother and mentor. Without hesitation, he offered me a subdomain. For me, that’s a huge deal, and I’m incredibly grateful!

“Okay thambi, enough of this English talk. Why did you release the dataset?” you might ask.

Well, there’s a reason for that, too. I’ve seen Selvakumar Duraipandian Bro’s posts on LinkedIn about the numerous Tamil datasets he has published on Hugging Face, including Thirukkural, Tholkappiyam, and more. I was truly inspired by his work. So I released TamilKavi’s poems as a dataset as well.

Now, you might ask, “So, thambi, after all this talk, what does your package actually do?”

It’s simple: TamilKavi helps you discover new Tamil poems. That’s all. Now your mind is asking,

“Edhuka evalo seenu?” (roughly, “why so much effort for this?”)

Well, I’m not just a developer. I’m a Tamil poet and a tech enthusiast, and for someone like that, it’s a crazy project. Through this journey, I’ve learned so much, especially about GitHub workflows.

If you find this content valuable, follow me for more upcoming blogs.

Connect with Me:

TASK 1: Python – Print exercises

1. How do you print the string “Hello, world!” to the screen?
Ans: Pass the string to print(); its sep and end parameters control the separator and line ending.

print("Hello, world!")

2. How do you print the value of a variable name which is set to “Syed Jafer” or your name?
Ans: Note that variable names are case-sensitive.

name = "Syed Jafer"
print(name)

3. How do you print the variables name, age, and city with labels “Name:”, “Age:”, and “City:”?
Ans: Pass the labels and variables to print(); avoid sep=",", which would also insert a comma between each label and its value.

print("Name:", name, "Age:", age, "City:", city)

4. How do you use an f-string to print name, age, and city in the format “Name: …, Age: …, City: …”?
Ans: An f-string inserts variables directly into the string. Also, you can assign values to multiple variables in a single line, as in the first line below.

name, age, city = "Syed Jafer", 25, "Chennai"
print(f"Name: {name}, Age: {age}, City: {city}")

5. How do you concatenate and print the strings greeting (“Hello”) and target (“world”) with a space between them?
Ans: The + operator concatenates the items.

greeting, target = "Hello", "world"
print(greeting + " " + target)

6. How do you print three lines of text with the strings “Line1”, “Line2”, and “Line3” on separate lines?
Ans: Use the newline escape \n (or three separate print() calls).

print("Line1\nLine2\nLine3")

7. How do you print the string He said, "Hello, world!" including the double quotes?
Ans: To print quotes inside a string, enclose the string in one type of quotes and use the other type inside it.

print('He said, "Hello, world!"')

8. How do you print the string C:\Users\Name without escaping the backslashes?
Ans: Prefix the string with r (a raw string) to treat backslashes as literal characters.

print(r"C:\Users\Name")

9. How do you print the result of the expression 5 + 3?
Ans: print() evaluates the expression first.

print(5 + 3)

10. How do you print the strings “Hello” and “world” separated by a hyphen -?
Ans: Use the sep parameter.

print("Hello", "world", sep="-")

11. How do you print the string “Hello” followed by a space, and then print “world!” on the same line?
Ans: Pass end="" so the first print() does not add a newline.

print("Hello ", end="")
print("world!")

12. How do you print the value of a boolean variable is_active which is set to True?

is_active = True
print(is_active)

13. How do you print the string “Hello ” three times in a row?
Ans: The * operator repeats a string.

print("Hello " * 3)

14. How do you print the sentence The temperature is 22.5 degrees Celsius. using the variable temperature?

temperature = 22.5
print(f"The temperature is {temperature} degrees Celsius.")

15. How do you print name, age, and city using the .format() method in the format “Name: …, Age: …, City: …”?
Ans: .format() fills the {} placeholders in order.

print("Name: {}, Age: {}, City: {}".format(name, age, city))

16. How do you print the value of pi (3.14159) rounded to two decimal places in the format The value of pi is approximately 3.14?
Ans: pi is the variable, and :.2f formats it as a floating-point number with 2 digits after the decimal.

pi = 3.14159
print(f"The value of pi is approximately {pi:.2f}")

17. How do you print the words “left” and “right” with “left” left-aligned and “right” right-aligned within a width of 10 characters each?
Ans: Use the format-spec alignment markers < and > with a field width.

print(f"{'left':<10}{'right':>10}")

Write a video using open-cv

Use the OpenCV VideoWriter function to write a video

Source Code

import cv2

video = cv2.VideoCapture("./data/video.mp4")
fourcc = cv2.VideoWriter.fourcc(*'FMP4')
writeVideo = cv2.VideoWriter('./data/writeVideo.mp4',fourcc,24,(1080,720))

while(video.isOpened()):
    suc, frame = video.read()
    if(suc):
        frame = cv2.resize(frame,(1080,720))
        cv2.imshow("write video",frame)
        writeVideo.write(frame)
        if(cv2.waitKey(24)&0xFF == ord('q')):
            break
    else:
        break

writeVideo.release()
video.release()
cv2.destroyAllWindows()

Pre-Required Knowledge

If you already know OpenCV, you can use it to open a video. If not, visit the open video blog post first.

Explain Code

Import the OpenCV library with import cv2, then open the video using the VideoCapture function.

fourcc

The fourcc function specifies the video codec as a four-character code.
Example: XVID is a codec for AVI files; the code above uses FMP4 for MP4.

VideoWriter

The VideoWriter function initializes the writeVideo object. It specifies video properties such as the codec, FPS, and resolution.
There are four arguments:

  1. Video Path: Specifies the video write path and video name.
  2. fourcc: Specifies the video codec.
  3. FPS: Sets an FPS value.
  4. Resolution: Sets the video resolution.

The read() function is used to read a frame.

After reading a frame, resize() it.
Note: If you set a resolution in writeVideo, you must resize the frame to the same resolution.

write

This function writes a video frame by frame into the writeVideo object.

The waitKey function is used to delay the program and check key events for program exit using an if condition.

Release objects

Once the writing process is complete, release the writeVideo and video objects to finalize the video writing process.

Additional Link

github code

open-cv write image

Explore the OpenCV imwrite function used to write an image.

Source Code

import cv2

image = cv2.imread("./data/openCV_logo.jpg",cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image,(600,600))
cv2.imwrite("./data/openCV_logo_grayscale.jpg",image)

Function

imwrite()

Explain Code

Import the OpenCV library import cv2.

The imread function reads an image. Since I need a grayscale image, I set the flag value as cv2.IMREAD_GRAYSCALE.

Resize the image using the resize() function.

imwrite

The imwrite function is used to save an image. It takes two arguments:

  1. Image path – Set the image path and name.
  2. Image – The image as a NumPy array.

Additional Link

github code

open-cv open video

Playing a video in OpenCV is similar to opening an image, but it requires a loop to continuously read multiple frames.

Source Code

import cv2

video = cv2.VideoCapture("./data/video.mp4")

while(video.isOpened()):
    isTrue, frame = video.read()
    
    if(isTrue):
        frame = cv2.resize(frame,(800,500))
        cv2.imshow("play video",frame)
        if(cv2.waitKey(24)&0xFF == ord('q')):
            break
    else:
        break

video.release()
cv2.destroyAllWindows()

Explain Program

Import OpenCV Library

import cv2

VideoCapture

This function is used to open a video by specifying a video path.

  • If you pass 0 as the argument, it opens the webcam instead.

isOpened

This function returns a boolean value to check if the video or resource is opened properly.

Use while to start a loop with the condition isOpened().

read

This function reads a video frame by frame.

  • It returns two values:
    1. Boolean: True if the frame is read successfully.
    2. Frame Data: The actual video frame.

Use if(isTrue) to check if the data is properly read, then show the video.

  • Resize the video resolution using resize function.
  • Show the video using imshow.
  • Exit video on keypress if(cv2.waitKey(24)&0xFF == ord('q')).
    • Press 'q' to break the video play loop.
Why use &0xFF?
  • This ensures the if condition runs correctly across platforms.
  • waitKey returns a key code; the AND operation with 0xFF (255 in decimal) keeps only its lowest 8 bits.
  • Any value that already fits in 8 bits comes through the AND unchanged.
    Example: 113 & 0xFF = 113 (same value as the first operand).

ord

The ord function returns the ASCII value of a character.

  • Example: ord('q') returns 113.

Finally, the if condition is validated.
If true, break the video play. Otherwise, continue playing.

release

This function releases the used resources.

destroyAllWindows() closes all windows and cleans up used memory.

Additional Link

github code

The Intelligent Loop: A Guide to Modern LLM Agents

Introduction

Large Language Model (LLM) based AI agents represent a new paradigm in artificial intelligence. Unlike traditional software agents, these systems leverage the powerful capabilities of LLMs to understand, reason, and interact with their environment in more sophisticated ways. This guide will introduce you to the basics of LLM agents and their think-act-observe cycle.

What is an LLM Agent?

An LLM agent is a system that uses a large language model as its core reasoning engine to:

  1. Process natural language instructions
  2. Make decisions based on context and goals
  3. Generate human-like responses and actions
  4. Interact with external tools and APIs
  5. Learn from interactions and feedback

Think of an LLM agent as an AI assistant who can understand, respond, and take actions in the digital world, like searching the web, writing code, or analyzing data.

The Think-Act-Observe Cycle in LLM Agents

Observe (Input Processing)

LLM agents observe their environment through:

  1. Direct user instructions and queries
  2. Context from previous conversations
  3. Data from connected tools and APIs
  4. System prompts and constraints
  5. Environmental feedback

Think (LLM Processing)

The thinking phase for LLM agents involves:

  1. Parsing and understanding input context
  2. Reasoning about the task and requirements
  3. Planning necessary steps to achieve goals
  4. Selecting appropriate tools or actions
  5. Generating natural language responses

The LLM is the "brain," using its trained knowledge to process information and make decisions.

Act (Execution)

LLM agents can take various actions:

  1. Generate text responses
  2. Call external APIs
  3. Execute code
  4. Use specialized tools
  5. Store and retrieve information
  6. Request clarification from users
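
The cycle is easiest to see in code. Below is a minimal, illustrative sketch of a think-act-observe loop in Python; call_llm and the tools registry are hypothetical stand-ins for a real LLM API and real tool integrations:

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; canned reply for illustration
    return "final:This is where the model's answer would go."

# Hypothetical tool registry: name -> callable
tools = {
    "search": lambda query: f"results for {query}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations = [f"Goal: {goal}"]  # Observe: start from the user's goal
    for _ in range(max_steps):
        prompt = "\n".join(observations) + \
            "\nReply with 'tool:<name>:<input>' or 'final:<answer>'"
        decision = call_llm(prompt)  # Think: the LLM picks the next step
        if decision.startswith("final:"):
            return decision[len("final:"):]  # Done: return the answer
        _, name, arg = decision.split(":", 2)  # Act: run the chosen tool
        result = tools[name](arg)
        observations.append(f"Observation from {name}: {result}")  # Observe again
    return "Step limit reached"

print(run_agent("What's the weather in Chennai?"))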

Key Components of LLM Agents

Core LLM

  1. Serves as the primary reasoning engine
  2. Processes natural language input
  3. Generates responses and decisions
  4. Maintains conversation context

Working Memory

  1. Stores conversation history
  2. Maintains current context
  3. Tracks task progress
  4. Manages temporary information

Tool Use

  1. API integrations
  2. Code execution capabilities
  3. Data processing tools
  4. External knowledge bases
  5. File manipulation utilities

Planning System

  1. Task decomposition
  2. Step-by-step reasoning
  3. Goal tracking
  4. Error handling and recovery

Types of LLM Agent Architectures

Simple Agents

  1. Single LLM with basic tool access
  2. Direct input-output processing
  3. Limited memory and context
  4. Example: Basic chatbots with API access

ReAct Agents

  1. Reasoning and Acting framework
  2. Step-by-step thought process
  3. Explicit action planning
  4. Self-reflection capabilities

Chain-of-Thought Agents

  1. Detailed reasoning steps
  2. Complex problem decomposition
  3. Transparent decision-making
  4. Better error handling

Multi-Agent Systems

  1. Multiple LLM agents working together
  2. Specialized roles and capabilities
  3. Inter-agent communication
  4. Collaborative problem-solving

Common Applications

LLM agents are increasingly used for:

  1. Personal assistance and task automation
  2. Code generation and debugging
  3. Data analysis and research
  4. Content creation and editing
  5. Customer service and support
  6. Process automation and workflow management

Best Practices for LLM Agent Design

Clear Instructions

  1. Provide explicit system prompts
  2. Define constraints and limitations
  3. Specify available tools and capabilities
  4. Set clear success criteria

Effective Memory Management

  1. Implement efficient context tracking
  2. Prioritize relevant information
  3. Clean up unnecessary data
  4. Maintain conversation coherence

Robust Tool Integration

  1. Define clear tool interfaces
  2. Handle API errors gracefully
  3. Validate tool outputs
  4. Monitor resource usage

Safety and Control

  1. Implement ethical guidelines
  2. Add safety checks and filters
  3. Monitor agent behavior
  4. Maintain user control

Ever Wonder How AI "Sees" Like You Do? A Beginner's Guide to Attention

Understanding Attention in Large Language Models: A Beginner's Guide

Have you ever wondered how ChatGPT or other AI models can understand and respond to your messages so well? The secret lies in a mechanism called ATTENTION - a crucial component that helps these models understand relationships between words and generate meaningful responses. Let's break it down in simple terms!

What is Attention?

Imagine you're reading a long sentence: "The cat sat on the mat because it was comfortable." When you read "it," your brain naturally connects back to either "the cat" or "the mat" to understand what "it" refers to. This is exactly what attention does in AI models - it helps the model figure out which words are related to each other.

How Does Attention Work?

The attention mechanism works like a spotlight that can focus on different words when processing each word in a sentence. Here's a simple breakdown:

  1. For each word, the model calculates how important every other word is in relation to it.
  2. It then uses these importance scores to create a weighted combination of all words.
  3. This helps the model understand context and relationships between words.

Let's visualize this with an example:

[Diagram: the word "it" attending to every other word in the sentence, with arrow thickness indicating attention weight]

In this diagram, the word "it" is paying attention to all other words in the sentence. The thickness of the arrows could represent the attention weights. The model would likely assign higher attention weights to "cat" and "mat" to determine which one "it" refers to.
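
To make this concrete, here is a small sketch of scaled dot-product attention, the computation used inside Transformers, written with NumPy. The embeddings and projection matrices are random stand-ins for what a trained model would learn:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d = 8                                  # toy embedding dimension
words = ["the", "cat", "it"]
X = rng.normal(size=(len(words), d))   # one (random) embedding per word

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # learned in a real model
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)          # relevance of every word to every other word
weights = softmax(scores)              # each row is an attention distribution
output = weights @ V                   # weighted combination of value vectors

print(weights[2])                      # how much "it" attends to "the", "cat", itself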

Multi-Head Attention: Looking at Things from Different Angles

In modern language models, we don't just use one attention mechanism - we use several in parallel! This is called Multi-Head Attention. Each "head" can focus on different types of relationships between words.

Let's consider the sentence: The chef who won the competition prepared a delicious meal.

  • Head 1 could focus on subject-verb relationships (chef - prepared)
  • Head 2 might attend to adjective-noun pairs (delicious - meal)
  • Head 3 could look at broader context (competition - meal)

Here's a diagram:

[Diagram: three attention heads, each linking a different pair of related words in the sentence]

This multi-headed approach helps the model understand text from different perspectives, just like how we humans might read a sentence multiple times to understand different aspects of its meaning.

Why Attention Matters

Attention mechanisms have revolutionized natural language processing because they:

  1. Handle long-range dependencies better than previous methods.
  2. Can process input sequences in parallel.
  3. Create interpretable connections between words.
  4. Allow models to focus on relevant information while ignoring irrelevant parts.

Recent Developments and Research

The field of LLMs is rapidly evolving, with new techniques and insights emerging regularly. Here are a few areas of active research:

Contextual Hallucinations

Large language models (LLMs) can sometimes hallucinate details and respond with unsubstantiated answers that are inaccurate with respect to the input context.

The Lookback Lens technique analyzes attention patterns to detect when a model might be generating information not present in the input context.

Extending Context Window

Researchers are working on extending the context window sizes of LLMs, allowing them to process longer text sequences.

Conclusion

While the math behind attention mechanisms can be complex, the core idea is simple: help the model focus on the most relevant parts of the input when processing each word. This allows language models to understand the context and relationships between words better, leading to more accurate and coherent responses.

Remember, this is just a high-level overview - there's much more to learn about attention mechanisms! Hopefully, this will give you a good foundation for understanding how modern AI models process and understand text.

Benefits of Binary Insertion Sort Explained

Introduction

Binary insertion sort is a sorting algorithm similar to insertion sort, but instead of using linear search to find the position where the element should be inserted, we use binary search.

Thus, we reduce the number of comparisons for inserting one element from O(N) (Time complexity in Insertion Sort) to O(log N).

Best of two worlds

Binary insertion sort is a combination of insertion sort and binary search.

Insertion sort is a sorting technique that finds the correct position of each element in the array and inserts it there. Binary search is a searching technique that repeatedly probes the middle of a sorted array to locate an element.

Since binary search has logarithmic complexity, the search step of the sort also drops to logarithmic order. The implementation below is a plain insertion sort program, except that binary search replaces the standard linear search.

How Binary Insertion Sort works ?

Process flow

In binary insertion sort, we divide the array into two subarrays β€” sorted and unsorted. The first element of the array is in the sorted subarray, and the rest of the elements are in the unsorted one.

We then iterate from the second element to the last element. For the i-th iteration, we make the current element our β€œkey.” This key is the element that we have to add to our existing sorted subarray.

Example

Consider the array 29, 10, 14, 37, 14

First Pass

i = 1, key = 10

Since we consider the first element to be in the sorted subarray, we start from the second element. We then apply binary search on the sorted subarray.

In this scenario, we can see that the middle element of the sorted subarray (29) is greater than the key element 10. So the position of the key element is 0. We then shift the remaining elements one position to the right.

Increment i.

Second Pass

i = 2, key = 14

Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.

In this scenario, by applying binary search, we see that the key element should be placed at index 1 (between 10 and 29). We then shift the remaining elements one position to the right.

Third Pass

i = 3, key = 37

Now the key element is 37. We will apply binary search in the sorted array to find the position of the key element.

In this scenario, by applying binary search, we see that the key element is already in its correct position.

Fourth Pass

i = 4, key = 14

Now the key element is 14. We will apply binary search in the sorted array to find the position of the key element.

In this scenario, by applying binary search, we see that the key element should be placed at index 2 (between 14 and 29). We then shift the remaining elements one position to the right.

Now we can see all the elements are sorted.

def binary_search(arr, key, start, end):
    # Return the index where key should be inserted in arr[start..end].
    # On equal keys we keep searching to the right, which keeps the sort stable.
    if start == end:
        if arr[start] > key:
            return start
        return start + 1
    if start > end:
        return start
    mid = (start + end) // 2
    if arr[mid] <= key:
        return binary_search(arr, key, mid + 1, end)
    return binary_search(arr, key, start, mid - 1)

def insertion_sort(arr):
    total_num = len(arr)
    for i in range(1, total_num):
        key = arr[i]
        j = binary_search(arr, key, 0, i - 1)
        # Slicing rebuilds the list; see the in-place variant further below
        arr = arr[:j] + [key] + arr[j:i] + arr[i + 1:]
    return arr

sorted_array = insertion_sort([29, 10, 14, 37, 14])
print("Sorted Array : ", sorted_array)

Pseudocode

Consider the array Arr,

  1. Iterate the array from the second element to the last element.
  2. Store the current element Arr[i] in a variable key.
  3. Find the position of the element just greater than Arr[i] in the subarray from Arr[0] to Arr[i-1] using binary search. Say this element is at index pos.
  4. Shift all the elements from index pos to i-1 towards the right.
  5. Arr[pos] = key.

Complexity Analysis

Worst Case

For inserting the i-th element into its correct position in the sorted subarray, finding the position (pos) takes O(log i) steps. However, to insert the element, we need to shift all the elements from pos to i-1. This takes i steps in the worst case (when we have to insert at the starting position).

We make a total of N insertions, so the worst-case time complexity of binary insertion sort is O(N^2).

This occurs when the array is initially sorted in descending order.

Best Case

The best case will be when the element is already in its sorted position. In this case, we don’t have to shift any of the elements; we can insert the element in O(1).

But we are using binary search to find the position where we need to insert. Even if the element is already in its sorted position, binary search takes O(log i) steps. Thus, for the i-th element, we make O(log i) operations, so the best-case time complexity is O(N log N).

This occurs when the array is initially sorted in ascending order.

Average Case

For average-case time complexity, we assume that the elements of the array are jumbled. On average, inserting the i-th element needs about i/2 shifts, so the average time complexity of binary insertion sort is O(N^2).

Space Complexity Analysis

Binary insertion sort is an in-place sorting algorithm. This means that it only requires a constant amount of additional space. We sort the given array by shifting and inserting the elements.

Therefore, the space complexity of this algorithm is O(1) if we use iterative binary search. It will be O(logN) if we use recursive binary search because of O(log N) recursive calls.
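
To make the O(1) claim concrete, here is a sketch of the same algorithm using Python's bisect module (an iterative binary search) and in-place shifting, so no extra list is built:

import bisect

def binary_insertion_sort_inplace(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        # bisect_right is an iterative binary search; it returns the insertion
        # point after any equal elements, which keeps the sort stable
        pos = bisect.bisect_right(arr, key, 0, i)
        for j in range(i, pos, -1):  # shift arr[pos..i-1] one slot to the right
            arr[j] = arr[j - 1]
        arr[pos] = key
    return arr

print(binary_insertion_sort_inplace([29, 10, 14, 37, 14]))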

Is Binary Insertion Sort a stable algorithm?

It is a stable sorting algorithm: elements with equal values appear in the same order in the final array as they did in the initial array, provided the binary search inserts after equal elements, as in the code above.

Pros and Cons

  1. Binary insertion sort works efficiently for smaller arrays.
  2. This algorithm also works well for almost-sorted arrays, where the elements are near their position in the sorted array.
  3. However, when the size of the array is large, the binary insertion sort doesn’t perform well. We can use other sorting algorithms like merge sort or quicksort in such cases.
  4. Making fewer comparisons is one of the strengths of this algorithm, so it is efficient when the cost of comparing keys is high.
  5. For example, when sorting an array of long strings, each comparison is expensive, so reducing the number of comparisons pays off.

Bonus Section

Binary Insertion Sort has a quadratic time complexity just as Insertion Sort. Still, it is usually faster than Insertion Sort in practice, which is apparent when comparison takes significantly more time than swapping two elements.

Can UV Transform Python Scripts into Standalone Executables?

Managing dependencies for small Python scripts has always been a bit of a hassle.

Traditionally, we either install packages globally (not recommended) or create a virtual environment, activate it, and install dependencies manually.

But what if we could run Python scripts like standalone binaries?

Introducing PEP 723 – Inline Script Metadata

PEP 723 (https://peps.python.org/pep-0723/) introduces a new way to specify dependencies directly within a script, making it easier to execute standalone scripts without dealing with external dependency files.

This is particularly useful for quick automation scripts or one-off tasks.

Consider a script that interacts with an API requiring a specific package,

# /// script
# requires-python = ">=3.11"
# dependencies = [
#   "requests",
# ]
# ///

import requests
response = requests.get("https://api.example.com/data")
print(response.json())

Here, instead of manually creating a requirements.txt or setting up a virtual environment, the dependencies are defined inline. When using uv, it automatically installs the required packages and runs the script just like a binary.

Running the Script as a Third-Party Tool

With uv, executing the script feels like running a compiled binary,

$ uv run fetch-data.py
Reading inline script metadata from: fetch-data.py
Installed dependencies in milliseconds

Behind the scenes, uv creates an isolated environment, ensuring a clean dependency setup without affecting the global Python environment. This allows Python scripts to function as independent tools without any manual dependency management.
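
You can take this one step further on Unix-like systems and make the script itself executable. Recent versions of uv document an env -S shebang for exactly this, so the script below (a variant of the earlier example) runs like any other binary:

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
#   "requests",
# ]
# ///

import requests
response = requests.get("https://api.example.com/data")
print(response.json())

$ chmod +x fetch-data.py
$ ./fetch-data.py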

Why This Matters

This approach makes Python an even more attractive choice for quick automation tasks, replacing the need for complex setups. It allows scripts to be shared and executed effortlessly, much like compiled executables in other programming environments.

By leveraging uv, we can streamline our workflow and use Python scripts as powerful, self-contained tools without the usual dependency headaches.

Learning Notes #77 – Smoke Testing with K6

In this blog, I jot down notes on what smoke testing is, how it got its name, and how to approach it in K6.

The term smoke testing originates from hardware testing, where engineers would power on a circuit or device and check if smoke appeared.

If smoke was detected, it indicated a fundamental issue, and further testing was halted. This concept was later adapted to software engineering.

What is Smoke Testing?

Smoke testing is a subset of test cases executed to verify that the major functionalities of an application work as expected. If a smoke test fails, the build is rejected, preventing further testing of a potentially unstable application. This test helps catch major defects early, saving time and effort.

Key Characteristics

  • Ensures that the application is not broken in major areas.
  • Runs quickly and is not exhaustive.
  • Usually automated as part of a CI/CD pipeline.

Writing a Basic Smoke Test with K6

A basic smoke test using K6 typically checks API endpoints for HTTP 200 responses and acceptable response times.

import http from 'k6/http';
import { check } from 'k6';

export let options = {
    vus: 1, // 1 virtual user
    iterations: 5, // Runs the test 5 times
};

export default function () {
    let res = http.get('https://example.com/api/health');
    check(res, {
        'is status 200': (r) => r.status === 200,
        'response time < 500ms': (r) => r.timings.duration < 500,
    });
}

Advanced Smoke Test Example

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
    vus: 2, // 2 virtual users
    iterations: 10, // Runs the test 10 times
};

export default function () {
    let res = http.get('https://example.com/api/login');
    check(res, {
        'status is 200': (r) => r.status === 200,
        'response time < 400ms': (r) => r.timings.duration < 400,
    });
    sleep(1);
}

Running and Analyzing Results

Execute the test using

k6 run smoke-test.js

Sample Output

checks...
βœ” is status 200
βœ” response time < 500ms

If any of the checks fail, K6 will report an error, signaling an issue in the application.

Smoke testing with K6 is an effective way to ensure that key functionalities in your application work as expected. By integrating it into your CI/CD pipeline, you can catch major defects early, improve application stability, and streamline your development workflow.

Golden Feedbacks for Python Sessions 1.0 from last year (2024)

Many Thanks to Shrini for documenting it last year. This serves as a good reference to improve my skills. Hope it will help many.

📒 What Participants wanted to improve

🚶‍♂️ Go a bit slower so that everyone can understand clearly without feeling rushed.

📚 Provide more basics and examples to make learning easier for beginners.

🖥 Spend the first week explaining programming basics so that newcomers don’t feel lost.

📊 Teach flowcharting methods to help participants understand the logic behind coding.

🕹 Try teaching Scratch as an interactive way to introduce programming concepts.

🗓 Offer weekend batches for those who prefer learning on weekends.

🗣 Encourage more conversations so that participants can actively engage in discussions.

👥 Create sub-groups to allow participants to collaborate and support each other.

🎉 Get “cheerleaders” within the team to make the classes more fun and interactive.

📒 Increase promotion efforts to reach a wider audience and get more participants.

🔍 Provide better examples to make concepts easier to grasp.

❓ Conduct more Q&A sessions so participants can ask and clarify their doubts.

🎙 Ensure that each participant gets a chance to speak and express their thoughts.

📹 Showing your face in videos can help in building a more personal connection with the learners.

🏆 Organize mini-hackathons to provide hands-on experience and encourage practical learning.

🔗 Foster more interactions and connections between participants to build a strong learning community.

✍ Encourage participants to write blogs daily to document their learning and share insights.

🎤 Motivate participants to give talks in class and other communities to build confidence.

πŸ“ Other Learnings & Suggestions

πŸ“΅ Avoid creating WhatsApp groups for communication, as the 1024 member limit makes it difficult to manage multiple groups.


βœ‰ Telegram works fine for now, but explore using mailing lists as an alternative for structured discussions.


πŸ”• Mute groups when necessary to prevent unnecessary messages like β€œHi, Hello, Good Morning.”


πŸ“’ Teach participants how to join mailing lists like ChennaiPy and KanchiLUG and guide them on asking questions in forums like Tamil Linux Community.


πŸ“ Show participants how to create a free blog on platforms like dev.to or WordPress to share their learning journey.


πŸ›  Avoid spending too much time explaining everything in-depth, as participants should start coding a small project by the 5th or 6th class.


πŸ“Œ Present topics as solutions to project ideas or real-world problem statements instead of just theory.


πŸ‘€ Encourage using names when addressing people, rather than calling them β€œSir” or β€œMadam,” to maintain an equal and friendly learning environment.


πŸ’Έ Zoom is costly, and since only around 50 people complete the training, consider alternatives like Jitsi or Google Meet for better cost-effectiveness.

Will try to incorporate these learnings in our upcoming sessions.

🚀 Let’s make this learning experience engaging, interactive, and impactful! 🎯

A Step-by-Step Guide to LLM Function Calling in Python

Function calling allows Claude to interact with external functions and tools in a structured way. This guide will walk you through implementing function calling with Claude using Python, complete with examples and best practices.

Prerequisites

To get started, you'll need:

  • Python 3.7+
  • anthropic Python package
  • A valid API key from Anthropic

Basic Setup

from anthropic import Anthropic
import json
# Initialize the client
anthropic = Anthropic(api_key='your-api-key')

Defining Functions

function_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a specific location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City name or coordinates"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit"
            }
        },
        "required": ["location"]
    }
}

Making Function Calls

def get_weather(location, unit="celsius"):
    # This is a mock implementation, but you can call your real API here
    return {
        "location": location,
        "temperature": 22 if unit == "celsius" else 72,
        "conditions": "sunny"
    }
def process_tool_call(tool_call):
    try:
        # tool_call.input is already a dict of the parsed arguments
        if tool_call.name == "get_weather":
            return json.dumps(get_weather(**tool_call.input))
        raise ValueError(f"Unknown function: {tool_call.name}")
    except Exception as e:
        return json.dumps({"error": str(e)})
# Example conversation with function calling
messages = [
    {
        "role": "user",
        "content": "What's the weather like in Paris?"
    }
]
while True:
    response = anthropic.messages.create(
        model="claude-3-5-haiku-latest",
        max_tokens=1024,
        messages=messages,
        tools=[function_schema]
    )
    # Check if Claude wants to call a function
    tool_calls = [block for block in response.content if block.type == "tool_use"]
    if tool_calls:
        # Echo Claude's turn, then answer each call with a tool_result
        messages.append({"role": "assistant", "content": response.content})
        results = []
        for tool_call in tool_calls:
            results.append({
                "type": "tool_result",
                "tool_use_id": tool_call.id,
                "content": process_tool_call(tool_call)
            })
        messages.append({"role": "user", "content": results})
    else:
        # Normal response - print and break
        print(response.content[0].text)
        break

Best Practices

  1. Clear Function Descriptions
  • Write detailed descriptions for your functions
  • Specify parameter types and constraints clearly
  • Include examples in the descriptions when helpful
  2. Input Validation
  • Validate all function inputs before processing
  • Return meaningful error messages
  • Handle edge cases gracefully
  3. Response Formatting
  • Return consistent JSON structures
  • Include status indicators in responses
  • Format error messages uniformly

  4. Security Considerations

  • Validate and sanitize all inputs
  • Implement rate limiting if needed
  • Use appropriate authentication
  • Don't expose sensitive information in function descriptions

Conclusion

Function calling with Claude enables powerful integrations between the language model and external tools. By following these best practices and implementing proper error handling, you can create robust and reliable function-calling implementations.

Learning Notes #71 – pyproject.toml

In the evolving Python ecosystem, pyproject.toml has emerged as a pivotal configuration file, streamlining project management and enhancing interoperability across tools.

In this blog, I delve into the significance, structure, and usage of pyproject.toml.

What is pyproject.toml?

Introduced in PEP 518, pyproject.toml is a standardized file format designed to specify build system requirements and manage project configurations. Its primary goal is to provide a unified, tool-agnostic approach to project setup, reducing the clutter of multiple configuration files.

Why Use pyproject.toml?

  • Standardization: Offers a consistent way to define project metadata, dependencies, and build tools.
  • Interoperability: Supported by various tools like Poetry, Flit, Black, isort, and even pip.
  • Simplification: Consolidates multiple configuration files (like setup.cfg, requirements.txt) into one.
  • Future-Proofing: As Python evolves, pyproject.toml is becoming the de facto standard for project configurations, ensuring compatibility with future tools and practices.

Structure of pyproject.toml

The pyproject.toml file uses the TOML format, which stands for “Tom’s Obvious, Minimal Language.” TOML is designed to be easy to read and write while being simple enough for parsing by tools.

1. [build-system]

Defines the build system requirements. Essential for tools like pip to know how to build the project.

[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"

requires: Lists the build dependencies required to build the project. These packages are installed in an isolated environment before the build process starts.

build-backend: Specifies the backend responsible for building the project. Common backends include:

  • setuptools.build_meta (for traditional Python projects)
  • flit_core.buildapi (for projects managed with Flit)
  • poetry.core.masonry.api (for Poetry projects)

2. [tool]

This section is used by third-party tools to store their configuration. Each tool manages its own sub-table under [tool].

Example with Black (Python code formatter):

[tool.black]
line-length = 88
target-version = ["py38"]
include = '\.pyi?$'
exclude = '''
/(
  \.git
  | \.mypy_cache
  | \.venv
  | build
  | dist
)/
'''

  • line-length: Sets the maximum line length for code formatting.
  • target-version: Specifies the Python versions the code should be compatible with.
  • include / exclude: Regular expressions to define which files Black should format.

Example with isort (import sorter)

[tool.isort]
profile = "black"
line_length = 88
multi_line_output = 3
include_trailing_comma = true

  • profile: Allows easy integration with formatting tools like Black.
  • multi_line_output: Controls how imports are wrapped.
  • include_trailing_comma: Ensures trailing commas in multi-line imports.

3. [project]

Introduced in PEP 621, this section standardizes project metadata, reducing reliance on setup.py.

[project]
name = "my-awesome-project"
version = "0.1.0"
description = "An awesome Python project"
readme = "README.md"
requires-python = ">=3.8"
authors = [
    { name="Syed Jafer K", email="syed@example.com" }
]
dependencies = [
    "requests>=2.25.1",
    "fastapi"
]
license = { file = "LICENSE" }
keywords = ["python", "awesome", "project"]
classifiers = [
    "Programming Language :: Python :: 3",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent"
]

  • name, version, description: Basic project metadata.
  • readme: Path to the README file.
  • requires-python: Specifies compatible Python versions.
  • authors: List of project authors.
  • dependencies: Project dependencies.
  • license: Specifies the project’s license.
  • keywords: Helps with project discovery in package repositories.
  • classifiers: Provides metadata for tools like PyPI to categorize the project.

4. Optional scripts and entry-points

Define CLI commands:

[project.scripts]
mycli = "my_module:main"

  • scripts: Maps command-line scripts to Python functions, allowing users to run mycli directly after installing the package.
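
For the mapping above to work, the package must contain a my_module with a main function. A minimal sketch of that module (the names simply mirror the example entry):

# my_module.py
def main():
    # Entry point that the mycli command invokes after the package is installed
    print("Hello from mycli!")

if __name__ == "__main__":
    main()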

Tools That Support pyproject.toml

  • Build tools: Poetry, Flit, setuptools
  • Linters/Formatters: Black, isort, Ruff
  • Test frameworks: Pytest (via addopts)
  • Package managers: Pip (PEP 517/518 compliant)
  • Documentation tools: Sphinx

Migration Tips

  • Gradual Migration: Move one configuration at a time to avoid breaking changes.
  • Backwards Compatibility: Keep older config files during transition if needed.
  • Testing: Use CI pipelines to ensure the new configuration doesn’t break the build.

Troubleshooting Common Issues

  1. Build Failures with Pip: Ensure build-system.requires includes all necessary build tools.
  2. Incompatible Tools: Check for the latest versions of tools to ensure pyproject.toml support.
  3. Configuration Errors: Validate your TOML file with online validators like TOML Lint.


📒 Python Learning 2.0 in Tamil – Call for Participants! 🚀

After an incredible year of Python learning (watch our journey here), we’re back with an all-new approach for 2025!

If you haven’t subscribed to our channel yet, don’t miss out. Support us by subscribing.

This time, we’re shifting gears from theory to practice with mini projects that will help you build real-world solutions. Study materials will be shared beforehand, and you’ll work hands-on to solve practical problems building actual projects that showcase your skills.

🔑 What’s New?

✅ Real-world mini projects
✅ Task-based shortlisting process
✅ Limited seats for focused learning
✅ Dedicated WhatsApp group for discussions & mentorship
✅ Live streaming of sessions for wider participation
✅ Study materials, quizzes, surprise gifts, and more!

📋 How to Join?

  1. Fill out the RSVP form below – open for 20 days (till March 2) only!
  2. After RSVP closes, shortlisted participants will receive tasks via email.
  3. Complete the tasks to get shortlisted.
  4. Selected students will be added to an exclusive WhatsApp group for intensive training.
  5. It’s COST-FREE learning; we only require your time, effort, and support.
  6. The course start date will be announced after RSVP.

📜 RSVP Form

☎ How to Contact for Queries?

If you have any queries, feel free to message on WhatsApp, Telegram, or Signal at 9176409201.

You can also mail me at learnwithjafer@gmail.com

Follow us for more opportunities, updates, and more…

Don’t miss this chance to level up your Python skills cost-free, with hands-on projects and exciting rewards! RSVP now and be part of Python Learning 2.0! 🚀

Our Previous Monthly meets – https://www.youtube.com/watch?v=cPtyuSzeaa8&list=PLiutOxBS1MizPGGcdfXF61WP5pNUYvxUl&pp=gAQB

Our Previous Sessions,

Postgres – https://www.youtube.com/watch?v=04pE5bK2-VA&list=PLiutOxBS1Miy3PPwxuvlGRpmNo724mAlt&pp=gAQB

Python – https://www.youtube.com/watch?v=lQquVptFreE&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&pp=gAQB

Docker – https://www.youtube.com/watch?v=nXgUBanjZP8&list=PLiutOxBS1Mizi9IRQM-N3BFWXJkb-hQ4U&pp=gAQB

Note: If you wish to support me for this initiative please share this with your friends, students and those who are in need.
