Create an AI Foundry Agent with Python tools

It looks like everyone is creating agents nowadays, most of the time with elaborate prompts telling a language model what it should do. Great, but we all know a language model isn’t good at doing everything. I don’t want it to do everything either, as it would need to be granted access to every possible resource in my environment. To extend the capabilities of an agent (and the underlying language model), you can provide tools. Read more →
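As a rough illustration of that idea (not code from the post itself): with the azure-ai-projects Python SDK, which is still in preview so names and signatures may differ per release, a plain Python function can be registered as a tool when creating a Foundry agent. The connection string, model deployment name, and the `get_order_status` function below are placeholders.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool
from azure.identity import DefaultAzureCredential


def get_order_status(order_id: str) -> str:
    """A plain Python function the agent can call as a tool (dummy implementation)."""
    return f"Order {order_id} has been shipped."


# Connect to the AI Foundry project; the connection string comes from the portal.
project = AIProjectClient.from_connection_string(
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
    credential=DefaultAzureCredential(),
)

# Wrap the Python function so its signature is exposed to the model as a tool.
functions = FunctionTool(functions={get_order_status})

# Create the agent with the tool definitions; the model must match a deployment
# available in your project.
agent = project.agents.create_agent(
    model="gpt-4o-mini",
    name="order-status-agent",
    instructions="Answer questions about orders using the available tools.",
    tools=functions.definitions,
)
print(f"Created agent {agent.id}")
```

Running the agent then only requires creating a thread, posting a message, and processing a run so the SDK can invoke the Python function on the agent's behalf.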

How I used Semantic Kernel Agents and Python to tune my resume

In an earlier post I wrote about using Semantic Kernel to create an Agentic AI solution, all using C#. Of course, similar flows can be created with Python. To try this, I’ve created a sample solution to update a resume so it’s more likely to pass the ATS requirements used by various companies nowadays. My sample is heavily inspired by Gian Paolo Santopaolo's CV-Pilot repository, which I was not able to use due to the CrewAI tooling phoning home and my DNS (PiHole) blocking those requests. Read more →

Create an Agentic AI solution with Semantic Kernel

We are finally at a state in the GenAI space where we can create agentic AI solutions with ease. When working with LLMs, I’m most familiar with Semantic Kernel, and this library works great for creating these solutions. In a nutshell, what you need to do is create a group chat, add your agents to it, and then let them work together to solve a problem. Do keep in mind, at the time of this writing, version 1. Read more →
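A minimal Python sketch of that group-chat flow, assuming a recent semantic-kernel 1.x release (the agents API is still marked experimental and has shifted between versions) and Azure OpenAI settings provided via environment variables; the agent names and instructions are made up for the example.

```python
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.agents import AgentGroupChat, ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.contents import ChatMessageContent
from semantic_kernel.contents.utils.author_role import AuthorRole


def build_agent(name: str, instructions: str) -> ChatCompletionAgent:
    """Create a chat-completion agent backed by an Azure OpenAI deployment
    (endpoint, key, and deployment name are read from environment variables)."""
    kernel = Kernel()
    kernel.add_service(AzureChatCompletion(service_id=name))
    return ChatCompletionAgent(kernel=kernel, name=name, instructions=instructions)


async def main() -> None:
    writer = build_agent("writer", "Draft a short answer to the user's question.")
    reviewer = build_agent("reviewer", "Review the draft and point out what is missing.")

    # The group chat lets the agents take turns; a termination strategy can be
    # configured to control when the conversation stops.
    chat = AgentGroupChat(agents=[writer, reviewer])
    await chat.add_chat_message(
        ChatMessageContent(role=AuthorRole.USER, content="What is an agentic AI solution?")
    )

    async for message in chat.invoke():
        print(f"{message.name}: {message.content}")


asyncio.run(main())
```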

Add MCP Server to search repository content using VS Code

I’m very happy GitHub Copilot exists, and lately with Agent mode it’s even better. It makes sure I can focus on the relevant pieces of my solutions and don’t have to worry too much about the plumbing. The models it uses are quite powerful and contain a lot of (old) data. When using new libraries, or new versions of existing libraries, the LLMs under the hood often don’t provide useful suggestions or edits. Read more →

Create an AI Assistant with your own data

The current large language models, like GPT-4, GPT-4 Turbo, and GPT-4o, are great when you need output generated based on data you feed into the prompt. Even the small language models, like Phi-3, do a great job at this. However, these models often don’t know a lot about the data within your company. Because of this, they can’t do a good job at answering questions that require data from your organization. Read more →