How I used Semantic Kernel Agents and Python to tune my resume

In an earlier post I wrote about using Semantic Kernel to create an Agentic AI solution, all using C#. Of course, similar flows can be created with Python. To try this, I’ve created a sample solution to update a resume so it’s more likely to pass the ATS requirements used by various companies nowadays.

My sample is heavily inspired by Gian Paolo Santopaolo's CV-Pilot repository, which I was not able to use because the CrewAI tooling phones home and my DNS (Pi-hole) blocks those requests. Even after disabling the tracking features, the libraries still tracked 'something', which caused the logic to break, so I decided to create something myself using Semantic Kernel.

What did I create?

A tool/flow to update a resume, based on the job description, so it can pass an ATS (Applicant Tracking System). These systems often check for specific wording and skills. For a human, it takes quite a bit of time to (re)write a resume to pass the ATS requirements. It's the perfect job for an LLM, as it's built to generate text based on other text input.

You can try doing this in a single prompt, but there's a high probability it won't perform the way you'd like. In my sample, I've created a 'Project Manager', 'Job Market Analyst', and 'Strategist' agent, each with its own specific job and goals.

Create the agents

To create the agents, you instantiate the different objects with the appropriate details. The sample below shows what I used.

from semantic_kernel.agents import Agent, ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion


def get_agents() -> list[Agent]:
    """Return a list of agents that will participate in the group style discussion."""
    projectmanager = ChatCompletionAgent(
        name="ProjectManager",
        description=PROJECT_MANAGER_DESCRIPTION,
        instructions=PROJECT_MANAGER_INSTRUCTIONS,
        service=AzureChatCompletion(),
    )
    jobmarketanalyst = ChatCompletionAgent(
        name="JobMarketAnalyst",
        description=JOB_MARKET_ANALYST_DESCRIPTION,
        instructions=JOB_MARKET_ANALYST_INSTRUCTIONS,
        service=AzureChatCompletion(),
    )
    strategist = ChatCompletionAgent(
        name="Strategist",
        description=STRATEGIST_DESCRIPTION,
        instructions=STRATEGIST_INSTRUCTIONS,
        service=AzureChatCompletion(),
    )
    # The order of the agents in the list will be the order in which they will be picked by the round robin manager
    return [projectmanager, jobmarketanalyst, strategist]
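The `*_DESCRIPTION` and `*_INSTRUCTIONS` constants are plain strings. The actual prompts from my sample aren't shown here, but a hypothetical sketch for the project manager could look like this (the wording is illustrative only):

```python
# Illustrative only -- not the actual prompts from the sample.
PROJECT_MANAGER_DESCRIPTION = "Coordinates the resume rewrite and delivers the final version."
PROJECT_MANAGER_INSTRUCTIONS = (
    "You are a project manager overseeing a resume rewrite. "
    "Collect the analyst's keyword findings and the strategist's rewrite, "
    "check them against the job description, and produce the final resume text."
)
```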

Create a group chat and let it run

Again, for this to work, you create a group chat and add the necessary agents to it.
The chat runs for seven rounds to create the best possible resume, based on the original resume and the provided job description.

from semantic_kernel.agents.orchestration.group_chat import GroupChatOrchestration, RoundRobinGroupChatManager
from semantic_kernel.agents.runtime import InProcessRuntime

# 1. Create a group chat orchestration with a round robin manager
agents = get_agents()
group_chat_orchestration = GroupChatOrchestration(
    members=agents,
    # max_rounds is odd, so that the project manager gets the last round
    manager=RoundRobinGroupChatManager(max_rounds=7),
    agent_response_callback=agent_response_callback,
)

original_resume, job_description = load_files(args.resume, args.jobdesc)

# 2. Create a runtime and start it
runtime = InProcessRuntime()
runtime.start()

# 3. Invoke the orchestration with a task and the runtime
task = get_instructions(job_description, original_resume)
orchestration_result = await group_chat_orchestration.invoke(
    task=task,
    runtime=runtime,
)

# 4. Wait for the results
value = await orchestration_result.get()
result_entry = f"***** Result *****\n{value}\n"
print(result_entry)
conversation_log.append(result_entry)
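The snippet above references a few helpers (`load_files`, `get_instructions`, `agent_response_callback`) that aren't shown. A minimal sketch, assuming they do nothing more than file I/O, prompt formatting, and logging, might look like this:

```python
from pathlib import Path

conversation_log: list[str] = []


def load_files(resume_path: str, jobdesc_path: str) -> tuple[str, str]:
    """Read the resume and job description from disk."""
    return Path(resume_path).read_text(), Path(jobdesc_path).read_text()


def get_instructions(job_description: str, original_resume: str) -> str:
    """Build the task prompt handed to the orchestration."""
    return (
        "Rewrite the resume below so it passes ATS screening for the given job.\n\n"
        f"### Job description\n{job_description}\n\n"
        f"### Original resume\n{original_resume}"
    )


def agent_response_callback(message) -> None:
    """Print and log each agent's reply (message is a ChatMessageContent)."""
    entry = f"**{message.name}**\n{message.content}\n"
    print(entry)
    conversation_log.append(entry)
```

The exact prompt wording and log format are my guesses; the point is that each helper is a few lines of plain Python around the orchestration.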

Just like with the C# example from my previous post, it is entirely possible to invoke Python functions during this chat. Gian's original tool also uses this, via CrewAI plugins.
I have not needed to re-implement this in my sample, as the output is already quite good.

In closing

I quite like these approaches, where multiple agents can be added to a conversation and each focuses on a single task. The orchestrator (and LLM) figures out which agent to use. This is especially great when you need to invoke an API or compute something, as you can use the LLM for the things a language model is good at and regular code for everything else. It makes solutions quite extensible and powerful.

You can find the complete sample & corresponding readme in my GitHub repository.

Needless to say, do check the output of text created by an LLM. It might add nonsense or interpret text the wrong way. As always, you are in control and should own the created text. Don't blindly trust text created by language models, or you might be in for a surprise.
