A day at BeConnected Day 13

🚀 I’m really happy with how it went at #beconnectedday; I enjoyed it a lot.
🔍 Igor Macori and I presented Search in Microsoft 365 from two different angles (full-text and AI-based), and the session was well received.

Click the link below to access the slides:

Copilot Studio announcements at Microsoft Build 2025

In this post, and in the video linked above, I’ll give you an overview of all the new Copilot Studio features announced at the recently concluded Microsoft Build 2025 conference, broken down into macro categories: multi-agent support, models, knowledge, tools, analytics, publishing, and application lifecycle management.

Multi-Agent Support

Multi-Agent Orchestration

Rather than relying on a single agent to do everything—or managing disconnected agents in silos—organizations can now build multi-agent systems in Copilot Studio, where agents delegate tasks to one another.

In the demo shown in my video, we have a banking agent that helps customers with their banking needs (for example checking account balances, transferring funds, reporting a stolen card, and so on). Previously you would have had to build a single agent with all of these capabilities; now you can break a complex agent down into many connected agents, each specialized in a single function.

Adding a new agent is very easy: you can add an agent from Copilot Studio, from the Microsoft 365 Agents SDK, from Microsoft Fabric, or from Azure AI Foundry. And in the future you’ll be able to connect to third-party agents via the A2A protocol.

Catalog of managed agents

Microsoft now provides a catalog of managed agents you can browse and install from within Copilot Studio. These agents are complete solutions that you can use as templates and customize for your needs.

Models

Copilot Tuning

A long-awaited feature is Copilot Tuning, which allows you to fine-tune large language models (LLMs) using your own data. This is done in a task-specific, controlled fashion; let’s see an example.

The first step is configuring your model. Click create new. Next, you’ll provide the model name, a description of the task you’d like to accomplish, and select a customization recipe tailored to the specific task type.

Next, you’ll give the model instructions to help it identify and prepare the most relevant data from your SharePoint sites.

Next, you need to provide the training data or knowledge, which forms the foundation of your fine-tuned model. Currently, only SharePoint sources are supported.

The final configuration step is to define, using security groups, who can use the fine-tuned model to build agents in Microsoft 365 Copilot.

Now that your model is configured, you’re ready to prepare your training data with data labeling. Data labeling is the process of identifying the best examples that you want the model to learn from.

Once your data has been processed, you’ll receive an email notification indicating that it is ready for labeling.

The model you have fine-tuned can be used in the Microsoft 365 Copilot Agent Builder. From the new Microsoft 365 Copilot interface, select Create Agent and you’ll be prompted to choose the purpose of your agent: general purpose or task-specific. Select task-specific to see the list of fine-tuned models available to you. Pick a model, and from then on you build and customize your agent as usual.

Bring Your Own Model as a primary response model

We now have the ability to change the LLM used by Copilot Studio while building our agents, in two different ways: at the agent level and at the tool level. Let’s start with the agent level.

Once your agent is initialized, go to Settings and open the Generative AI tab: there is now a drop-down to change the primary response model, with some preset options plus the possibility to connect to Azure AI Foundry and select your own published models.

Bring Your Own Model in the Prompt tool

The second way to bring your own model into a Copilot Studio agent is via the Prompt tool.

The Prompt tool allows you to describe, in natural language, a task to be completed, and Copilot Studio will call it when it deems it necessary.

You can now specify a model for your prompt. Some managed models curated by Microsoft are already available, and it’s also possible to use one of the 1,900-plus Azure AI Foundry models, depending on your specific use case.

Knowledge

SharePoint lists, Knowledge Instructions

Copilot Studio is making progress on knowledge management as well. It now supports SharePoint lists, as well as uploading files and grouping them together as a single knowledge source. You also now have the option to write instructions at the knowledge level.

Tools

Computer Use

I think Computer Use is by far the most impressive tool added to Copilot Studio. Unfortunately, it’s going to be available only to large customers in the US, at least for now.

Computer Use allows Copilot Studio agents to interact with desktop apps and websites like a person would—clicking buttons, navigating menus, typing in fields, and adapting automatically as the interface changes. This opens the door to automating complex, user interface (UI)-based tasks like data entry and invoice processing, with built-in reasoning and full visibility into every step.

Dataverse Functions

Dataverse Functions are also available in preview. You can create one from the Power Apps portal; the function can have inputs and outputs and a formula containing your business logic. You can then add that function to your agent by selecting the Dataverse connector and choosing the unbound action.

You can configure it with the appropriate inputs and outputs, and then it becomes one more tool at your agent’s disposal.

Intelligent Approvals in Agent Flows

Agent Flows is a new tool we have been seeing for a few weeks now; Microsoft is actively working on it, and at the Build conference they presented Intelligent Approvals.

Intelligent Approvals inserts an AI-powered decision-making stage directly within the Advanced Approval action. You simply provide natural language business rules and select your desired AI model: the model then evaluates submitted materials—images, documents, databases, or knowledge articles—to deliver a transparent approve or reject decision, complete with a detailed rationale.

Analytics

Evaluation Capabilities

The challenge in building any kind of agent is making sure it responds accurately when users ask different types of questions.

This is where the new evaluation capabilities in Copilot Studio come in. Now you can run automated tests against your agent directly from the testing panel. You can upload your set of questions, import queries from the chat history or even generate questions using AI. You can review and edit each question before running the test. Then you can run the evaluation and get a visual summary of the evaluation results.

Publishing

Publishing to WhatsApp and SharePoint

You can now publish your agent to WhatsApp and, more importantly, you can publish it to SharePoint! That’s another long-awaited feature: until now it wasn’t possible to have a SharePoint agent with actions and other advanced features, and now you finally can.

Let me just point out that if you create your SharePoint agent from SharePoint, you can’t customize it in Copilot Studio yet. So this only works if you start from Copilot Studio and then publish to SharePoint; the reverse is not possible yet.

Code Interpreter

Generate a chart via Python code

Copilot Studio agents can now generate charts, thanks to the new Code Interpreter feature. Python code is generated automatically in response to a prompt; you can inspect and reuse it, and it is then executed to produce the chart returned as the answer to the user.
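
To give a concrete idea, here is a minimal sketch of the kind of Python the Code Interpreter might produce for a request such as "plot monthly revenue as a bar chart" (the sample data and column names are assumptions, not actual generated output):

```python
# Hypothetical example of chart-generation code similar to what Code Interpreter emits.
import matplotlib.pyplot as plt
import pandas as pd

# Assumed sample data; in a real conversation this would come from the user's prompt or knowledge.
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "revenue": [12500, 14300, 11800, 16200],
})

fig, ax = plt.subplots()
ax.bar(sales["month"], sales["revenue"], color="steelblue")
ax.set_title("Monthly revenue")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue")
fig.savefig("monthly_revenue.png")  # the rendered image is what the agent returns to the user
```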

ALM

Source code integration

With native source control integration you can connect the agents in your environment to a source control repository, such as Azure DevOps, and make commits directly from the UI, so that everything you do is source controlled and managed the same way you would expect any software to be managed.

Edit agent in VS Code

And finally, for the real nerds, the Visual Studio Code extension allows you to clone agents to your local machine and start editing the code behind them!

You get syntax error highlighting, auto-complete, inline documentation, and so forth.

Copilot Studio + Google Search + MCP = Turbocharged AI agents

In my video below we’ll look at something you probably haven’t seen elsewhere yet: we’re going to use the MCP SDK for C# and .NET to build an MCP server that leverages Google Search, and we’ll use it from an AI agent created with Copilot Studio!
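
The video builds the server in C# with the MCP SDK for .NET; purely as an illustration of the same idea, here is a minimal sketch using the official MCP Python SDK together with Google’s Custom Search JSON API (the tool name, the GOOGLE_API_KEY and GOOGLE_CSE_ID environment variables, and the result formatting are my assumptions, not the code from the video):

```python
# Sketch of an MCP server exposing a Google search tool.
# The video uses the C#/.NET MCP SDK; this Python version only illustrates the shape.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("google-search")

@mcp.tool()
def google_search(query: str, num_results: int = 5) -> list[str]:
    """Search the web with Google and return the top result titles and links."""
    response = httpx.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": os.environ["GOOGLE_API_KEY"],  # assumed environment variable
            "cx": os.environ["GOOGLE_CSE_ID"],    # assumed environment variable
            "q": query,
            "num": num_results,
        },
        timeout=10,
    )
    response.raise_for_status()
    items = response.json().get("items", [])
    return [f"{item['title']} - {item['link']}" for item in items]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```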

Model Context Protocol (MCP): Everything you need to know

Introduction

Model Context Protocol (MCP) is the new open-source standard that organizes how AI agents interact with external data and systems.

In this post we will see what MCP is, how it works, how it is applied, and what its current limitations are. Watch my video above for a further practical example of an MCP server coded in C# and .NET.

The problem

So, what problem is MCP trying to solve here?

Let’s imagine we want to connect four smartphones to our computer to access their data: until not long ago, we would have needed four different cables with different connectors and maybe even four USB ports.

The solution

Then the USB-C standard came along and sorted everything out: now I only need one type of cable, one type of port, one charger, and I can connect and charge phones of any brand.

So, think of MCP as the USB-C for AI agents:

MCP standardizes the way AI agents can connect to data, tools, and external systems. The idea, which isn’t all that original, is to replace the plethora of custom integration methods—each API or data source having its own custom access method—with a single, consistent interface.

According to MCP specifications, an MCP server exposes standard operations, lists what it offers in a standardized manner, and can perform an action when an AI agent, the MCP client, requests it. On the client side, the AI agent reads what the server has to offer, understands the description, parameters, and the meaning of the parameters, and thus knows if and when it is useful to call the server and how to call it.

Here, if you think about it, it’s really a stroke of genius—simple yet powerful. On one side, I have a standard interface, and on the other side, I have an LLM that learns the server’s intents from the standard interface and thus understands if and when to use it. And all of this can even work automatically, without human intervention.
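
To make the exchange concrete, here is a minimal sketch of the client side using the official MCP Python SDK: the agent first lists what the server offers, then calls a tool. The server command and the tool name are assumptions for illustration.

```python
# Sketch of an MCP client: discover the server's tools, then call one.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumed: the server from the previous sketch, started as a local stdio process.
    server = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # The client reads, in a standard format, what the server has to offer.
            listing = await session.list_tools()
            for tool in listing.tools:
                print(tool.name, "-", tool.description)

            # In a real agent the LLM decides if and when to call a tool; here we call one directly.
            result = await session.call_tool("google_search", {"query": "Copilot Studio MCP"})
            print(result)

asyncio.run(main())
```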

The MCP protocol

So, let’s get a bit practical. MCP helps us standardize three things, fundamentally:

  • Resources.
  • Tools.
  • Prompts.

Resources

An MCP server can provide resources to calling agents, and by resources, we mean context or knowledge base. For example, we can have an MCP server that encapsulates your database or a set of documents, and any AI agent can query your MCP server and obtain documents relevant to a prompt requested by its user. The brilliance here lies in having completely decoupled everything, meaning the MCP server has no knowledge of who is calling it, and AI agents use MCP servers without having hardcoded links or parameters in their own code.
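
As a small illustration of that decoupling, here is a sketch of a server exposing documents as MCP resources using the Python SDK (the URI scheme and the in-memory store are assumptions standing in for a real database or document library):

```python
# Sketch of an MCP server exposing documents as resources.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-server")

# Assumed in-memory store; a real server would query a database or a document library.
DOCS = {
    "returns-policy": "Items can be returned within 30 days of purchase...",
    "warranty": "All products are covered by a two-year warranty...",
}

@mcp.resource("docs://{doc_id}")
def get_document(doc_id: str) -> str:
    """Return the text of a document by its identifier."""
    return DOCS.get(doc_id, "Document not found")

if __name__ == "__main__":
    mcp.run()
```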

Tools

Tools are simply functions that the MCP server exposes on its interface and that AI agents can call, nothing more, nothing less.

Prompts

Prompts, finally, allow MCP servers to define reusable prompt templates and workflows, which AI agents can then present to users, as well as to the internal LLMs of the agent itself, to interact with the MCP server.
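
Here is a matching sketch of a reusable prompt template exposed by a server via the Python SDK (the template wording is an assumption):

```python
# Sketch of an MCP server exposing a reusable prompt template.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prompt-server")

@mcp.prompt()
def summarize_ticket(ticket_text: str) -> str:
    """Reusable template: ask the model to summarise a support ticket."""
    return (
        "Summarise the following support ticket in three bullet points, "
        f"highlighting severity and the recommended next action:\n\n{ticket_text}"
    )

if __name__ == "__main__":
    mcp.run()
```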

MCP marketplaces

The MCP standard was proposed by Anthropic last November, and within a few months, it has literally taken off. There are already numerous implementations of MCP servers scattered across the internet, covering practically everything. So, you can really go wild creating all sorts of AI agents simply by integrating these servers into your projects.

To make it easier to find these MCP servers, some marketplaces have also emerged; I mention the most important ones in the slide shown in my video. I’ve counted five so far:

I would say the most comprehensive ones at the moment are the first two, MCP.so and MCPServers.org. However, it’s possible that Anthropic might decide to release its official marketplace in the near future.

Areas of improvement

We have seen that MCP is a very promising standard, but what are its weaknesses, if any? Well, there are a few at the moment, which is understandable since it’s a very young standard. Currently, the biggest limitation is the lack of a standard authentication and authorization mechanism between client and server.

Work is being done on it. The idea is to integrate OAuth 2.0 and 2.1. Another significant shortcoming is that there is currently no proper discovery protocol for the various MCP servers scattered across the network. Yes, we’ve seen that there are marketplaces, but if I wanted to allow my AI agent to perform discovery completely autonomously, to find the tools it needs on its own, well, that’s not possible yet.

We know that Anthropic is working on it. There will be a global registry sooner or later, and when it finally becomes available, we will definitely see another significant boost in the adoption of this protocol. Additionally, the ability to do tracing and debugging is missing, and that’s no small matter. Imagine, for example, that our AI agent calling these MCP servers encounters an error or something doesn’t go as expected:

What do we do? Currently, from the caller’s perspective, MCP servers are black boxes. If something goes wrong, it’s impossible for us to understand what’s happening. There’s also no standard for sequential calls and for resuming a workflow that might have been interrupted halfway due to an error.

For example, I have an AI agent that needs to make 10 calls to various MCP tools, and I encounter an error on the fifth call. What do I do? The management of retry and resume state is entirely the client’s responsibility, and there is no standard; everyone implements it in their own way. So, MCP is still young and has significant limitations. However, it is developed and promoted by Anthropic, has been well received by the community, and has been adopted by Microsoft and also by Google, albeit with some reservations in the latter case.

Conclusions

So, I would say that the potential to become a de facto standard is definitely there, and it’s certainly worth spending time to study and adopt it in our AI agents.

Subscribe to my blog and YouTube channel for more ☺️

My new YouTube channel

✍ This is something I had wanted to do for a long time, and I finally made up my mind: I opened my own YouTube channel.

💻 I’ll talk about topics from my work, but the initial focus is on AI agents, a topic that particularly fascinates me.

Below is the link to my first video (currently the only one available): consider it a pilot, there is still a long way to go ☺️

🆘 Let me know what you liked and what you didn’t like (especially what you did NOT like).

Goodbye LLM? Meta revolutionises AI with Large Concept Models!

In recent years, Large Language Models (LLM) have dominated the field of generative artificial intelligence. However, new limitations and challenges are emerging that require an innovative approach. Meta has recently introduced a new architecture called Large Concept Models (LCM), which promises to overcome these limitations and revolutionise the way AI processes and generates content.

Limitations of LLMs

LLMs such as ChatGPT, Claude, Gemini, and so on need huge amounts of data for training and consume a significant amount of energy. Furthermore, their ability to scale is limited by the availability of new data and by increasing computational complexity. These models operate at the token level, which means they process input and generate output based on individual word fragments, making reasoning at more abstract levels difficult.

Introduction to Large Concept Models (LCM)

Large Concept Models represent a new paradigm in the architecture of AI models: instead of working on the level of tokens, LCMs work on the level of concepts. This approach is inspired by the way we humans process information, working on different levels of abstraction and concepts rather than single words.

How LCMs work

LCMs use an embedding model called SONAR, which supports up to 200 languages and can process both text and audio. SONAR transforms sentences and speech into vectors representing abstract concepts. These concepts are independent of language and modality, allowing for greater flexibility and generalisation capability.

Advantages of LCMs

Multi-modality and Multilingualism

LCMs are language- and modality-agnostic, which means they can process and generate content in different languages and formats (text, audio, images, video) without the need for retraining. This makes them extremely versatile and powerful.

Computational Efficiency

Since LCMs operate at the concept level, they can handle very long inputs and outputs more efficiently than LLMs. This significantly reduces energy consumption and the need for computational resources.

Zero-Shot generalisation

LCMs show an unprecedented zero-shot generalisation capability, being able to perform new tasks without the need for specific training examples. This makes them extremely adaptable to new contexts and applications.

Challenges and Future Perspectives

Despite promising results, LCMs still present some challenges. Sentence prediction is more complex than token prediction, and there is more ambiguity in determining the next sentence in a long context. However, continued research and optimisation of these architectures could lead to further improvements and innovative applications.

Conclusions

Large Concept Models represent a significant step forward in the field of artificial intelligence. With their ability to operate at the concept level, multimodality and multilingualism, and increased computational efficiency, LCMs have the potential to revolutionise the way AI processes and generates content. It will be interesting to see how this technology will develop and what new possibilities it will open up in the future of AI.

Messing around with SharePoint Agents

A customer asked me a question about SharePoint Agents that I was unable to answer. Having realised that perhaps SharePoint Agents are less trivial than I thought, I decided to tackle the question head-on, doing some tests to see whether there was an answer that made sense.

A few days ago I wrote an article on Copilot Agents (you can find it here), and as you can see from reading it, I relegated SharePoint Agents to the end, giving them just a standard paragraph that, in truth, adds nothing to what we have already known for a while.

But then, during a demo the other day, a customer asked me a question about SharePoint Agents that I was unable to answer. Having realised that perhaps SharePoint Agents are less trivial than I thought, I decided to tackle the question head-on that afternoon, doing some tests to see whether there was an answer that made sense.

This article is the result of those thoughts, and assumes a basic knowledge of SharePoint Agents.

The question

The customer’s question was: ‘Having one agent per SharePoint site seems excessive and unmanageable to me, how can I instead create my own “official” agent once, and make it the default agent for all SharePoint sites?’.

Let’s try to give an answer

I created a test site, called “Test Donald“:

The site collection has its own default SharePoint Agent, with the same name as the site. This default agent does not have a corresponding .agent file in the site, nor is there an option to edit it. As we already know, however, I can create more agents, so I created a second one:

The new agent can be created directly from the menu, or by selecting a library or documents in a library:

(there must be at least 1 document in the library, otherwise the ‘Create an agent’ button won’t appear).

Please note that it is not (yet) possible to customise a SharePoint Agent in Copilot Studio:

A SharePoint Agent published on one site can also be based on knowledge from other SharePoint sites, but it’s important to bear in mind that only a maximum of 20 knowledge sources can be added:

The Edit popup shows the location of the saved agent:

Navigating the link will lead to the location of the associated .agent file:

The new agent thus created is Personal and as such only accessible by the user who created it. When the site owner approves it, it becomes Published (Approved) at the site level and therefore accessible to the other (licensed) users of the site:

Once the agent has been approved, the relevant file is physically moved automatically by SharePoint to Site Assets > Copilots > Approved:

The newly approved agent can now be set as the site’s default agent:

There can only be one default agent for any given site:

Back to the question, then: can I configure a SharePoint Agent once and then have it as the site default agent on all sites?

To answer the question, I created a second site collection called ‘Test Donald 2’ and, in it, a ‘Documents Agent 2’ agent, which has both sites (Test Donald and Test Donald 2) as sources:

I then saved it, approved it, and set it as the default for Test Donald 2:

The next step then was to copy the relevant .agent file from Test Donald 2 to Test Donald:

The agent just copied appears correctly in the list as an approved agent on the Test Donald site:

And it is also possible to select it and set it as site default agent:

Conclusions

The answer then is Yes, you can have a default agent that is always the same on all SharePoint sites, provided you accept the following limitations:

  • 20-source limitation (inherent limitation of SharePoint Agents, at least for now).
  • Customisation in Copilot Studio not yet available.
  • Manual copying of the .agent file and manual approval as default agent.

The copying of the .agent file could be automated with a Power Automate flow associated with a provisioning process. However, approving it as the default agent is currently not possible via API.
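
The flow itself is out of scope here, but to give an idea of the copy step, below is a minimal sketch that copies the .agent file between the two Site Assets libraries via the Microsoft Graph drive-item copy API. The drive IDs, item IDs, file name, and token acquisition are placeholders and assumptions, not values from my tenant.

```python
# Sketch: copy an approved .agent file from one site's Site Assets library to another via Microsoft Graph.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access token with Files.ReadWrite.All>"  # assumed to be acquired elsewhere
headers = {"Authorization": f"Bearer {token}"}

source_drive_id = "<drive id of the Test Donald 2 Site Assets library>"   # placeholder
agent_item_id = "<item id of the .agent file>"                            # placeholder
target_drive_id = "<drive id of the Test Donald Site Assets library>"     # placeholder
target_folder_id = "<item id of the Copilots/Approved folder>"            # placeholder

response = requests.post(
    f"{GRAPH}/drives/{source_drive_id}/items/{agent_item_id}/copy",
    headers=headers,
    json={
        "parentReference": {"driveId": target_drive_id, "id": target_folder_id},
        "name": "Documents Agent 2.agent",  # placeholder file name
    },
)
response.raise_for_status()  # Graph returns 202 Accepted; the copy completes asynchronously
```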

An introduction to Microsoft 365 Copilot Agents

In this article, I will provide an overview of Copilot Agents, explaining what they are, which out-of-the-box agents Microsoft has released to date, what types of agents we can customize, and I will also explain what autonomous agents and SharePoint Agents are.

Creating, extending and customising Copilot agents is a fundamental process for adapting functionality to specific business or personal needs.

What are Copilot Agents?

Agents are scoped versions of Microsoft 365 Copilot that act as AI assistants to automate and run business processes. They enable customers to bring custom knowledge, skills, and process automation into Microsoft 365 Copilot for their specific needs. 

Pre-built Copilot agents

Microsoft has launched a wide range of pre-built agents that span multiple business functions. These agents can be deployed immediately or further configured by incorporating your organisation’s knowledge and skills.

At the time of writing they are all in preview. A brief description of the available pre-built agents is provided below:

  • Website Q&A: Answers common questions from users using the content on your website.
  • Team Navigator: Assists employees in finding colleagues and their hierarchy within the organisation.
  • IT Helpdesk: Empowers employees to resolve issues and create/view support tickets.
  • Store Operations: Improves the efficiency of retail frontline workers by enabling easy access to store procedures and policies.
  • Case Management: Provides around-the-clock automated support to customers by understanding their issues and creating cases.
  • Safe Travels: Provides answers to common travel questions and related health and safety guidelines.
  • Inclusivity: Helps employees to have a safe place to ask questions and to learn how to activate inclusivity in a modern and diverse workforce.
  • Sustainability Insights: Enables users to easily get insights and data about a company’s sustainability goals and progress.
  • Weather: Gets the current weather conditions and forecast.
  • Benefits: Provides personalized information to your employees on benefits offered to them.
  • Citizen Services: Enables Public Sector Organizations to assist their citizens with information about services available to them.
  • Financial Insights: Helps financial services professionals get information from their organization’s financial documents.
  • Self-Help: Enables customer service agents to resolve issues faster.
  • Awards and Recognition: Streamlines the process of nominating and recognizing your employees for their contributions and achievements.
  • Leave Management: Streamlines the leave request and time-off process for your employees.
  • Wellness Check: Conducts automated wellness checks to gauge employee morale.
  • Sales Qualification: Enables sellers to focus their time on the highest priority sales opportunities.
  • Sales Order: Automates the order intake process from entry to confirmation by interacting with customers and capturing their preferences.
  • Supplier Communications: Autonomously manages collaboration with suppliers to confirm order delivery, while helping to prevent potential delays.
  • Finance Reconciliation: Helps teams prepare and cleanse data sets to simplify and reduce time spent on the financial period close process.
  • Account Reconciliation: Automates the matching and clearing of transactions between subledgers and the general ledger, helping accountants and controllers speed up the financial close process.
  • Time and Expense: Autonomously manages time entry, expense tracking, and approval workflows.
  • Customer Intent: Enables evergreen self-service by continuously discovering new intents from past and current customer conversations across all channels, mapping issues and corresponding resolutions maintained by the agent in a library.
  • Customer Knowledge Management: Helps ensure knowledge articles are kept perpetually up to date by analysing case notes, transcripts, summaries, and other artifacts from human-assisted cases to uncover insights.
  • Scheduling Operations: Enables dispatchers to provide optimized schedules for technicians, even as conditions change throughout the workday.

How do I create a Copilot Agent?

Before diving into how we can extend Copilot, it is essential to understand two concepts:

  • the anatomy of Microsoft 365 Copilot,
  • the two main types of agents that we can create: Declarative Agents and Custom Engine Agents.

Anatomy of Microsoft 365 Copilot

Foundation Models

Foundation models are large language models (LLMs) that form the core of Copilot’s capabilities. These models, such as GPT-4, are trained on vast amounts of data and use deep learning techniques to understand, summarize, predict, and generate content. They provide the underlying AI intelligence that powers Copilot’s functionality.

User Experience

The user experience component focuses on how users interact with Copilot within Microsoft 365 apps like Word, Excel, PowerPoint, Outlook, and Teams. It ensures that the integration is seamless and intuitive, allowing users to leverage Copilot’s capabilities without disrupting their workflow. This includes features like drafting, summarizing, and answering questions in the context of the user’s work.

Orchestrator

The orchestrator is responsible for coordinating the various components of Copilot and ensuring that they work together harmoniously. It manages the flow of information between the foundation models, user data, and the Microsoft 365 apps, ensuring that responses are accurate and relevant to the user’s context.

Knowledge

The knowledge component involves the integration of Microsoft Graph, which includes information about users, activities, and organizational data. This allows Copilot to access and utilize relevant data from emails, chats, documents, and meetings to provide contextually appropriate responses and insights.

Skills

Skills refer to the specific capabilities and functionalities that Copilot offers within different Microsoft 365 apps. For example, in Word, Copilot can help users create, understand, and edit documents; in Excel, it can assist with data analysis and visualization; and in Teams, it can facilitate communication and collaboration.

Declarative Agents

Declarative agents are designed to be configured through predefined rules and well-defined scenarios. These agents function on the basis of declarations of intent and specific conditions that guide their behaviour.

  • Ease of Use: They do not require in-depth programming knowledge.
  • Speed of Implementation: They can be set up quickly using intuitive graphical interfaces.
  • Limitations: They might be less flexible when it comes to handling complex scenarios outside the preset rules.

Custom Engine Agents

Custom engine agents offer the highest level of control and customisation.

  • Flexibility: They allow detailed customisation by writing code and integrating advanced algorithms.
  • Power: They can handle complex and dynamic scenarios requiring sophisticated and adaptive logic.
  • Development Effort: They require more technical expertise and more time for development and maintenance.

Blue pill or pink pill…?

When it comes to extending Copilot, you can decide to swallow the blue pill or swallow the pink pill…

Staying on the Blue side of things, you’re going to reuse the foundations of Copilot: you reuse the UI, the foundation models, and also the Microsoft orchestrator. What you add is your own Knowledge and Skills.

On the other hand, if you swallow the pink pill, you are going to completely replace the Copilot engine with your own custom one.

Swallowing the blue pill: creating a Declarative Agent

Declarative agents are a collection of custom knowledge and custom skills hosted on the Microsoft 365 Copilot orchestrator and foundation models.

You can add knowledge to Declarative Agents via connectors, and skills via plugins.

Adding Knowledge

Graph Connectors

Microsoft Graph Connectors provide a platform to ingest your unstructured data into Microsoft Graph, so that Copilot can reason over the entirety of your enterprise content. Microsoft Graph Connectors existed before Copilot; in fact, they power other Microsoft 365 experiences, like Microsoft Search.

Power Platform Connectors

Power Platform Connectors are essentially API wrappers that allow Copilot Agents to interact with various external services and applications. These connectors enable Copilot to perform a wide range of tasks by connecting to services both within the Microsoft ecosystem (like Office 365, SharePoint, Dynamics 365) and outside it (such as Twitter, Google services, Salesforce). There are three main types of connectors:

  1. Standard Connectors: These are included with all Copilot Studio plans and cover common services in Copilot Studio.
  2. Premium Connectors: Available in select Copilot Studio plans, these offer more advanced functionalities in Copilot Studio.
  3. Custom Connectors: These allow you to connect to any publicly available API for services not covered by existing connectors in Copilot Studio.

By leveraging these connectors, Copilot Agents can access and utilise data from various sources, enhancing their capabilities and making them more dynamic and responsive to specific business needs.

Adding Skills

Plugins

Plugins don’t ingest data: they look directly at the external systems in real time, and they can also interact with those systems, that is, not only read data but also write it.

There are 3 types of plugin:

  • Copilot Studio Actions: plugins that extend the functionality of Microsoft 365 Copilot, allowing users to perform specific tasks, retrieve data, and interact with external systems. By leveraging a low-code interface, Copilot Studio makes it accessible for users without extensive technical knowledge to create and manage these actions.
  • Message Extensions: the well-known search and action capabilities for Microsoft Teams, which now also work as plugins for Copilot agents.
  • API Plugins: they enable declarative agents in Microsoft 365 Copilot to interact with REST APIs that have an OpenAPI description.

The pink pill: Custom Engine Agents

Custom engine agents are developed using custom foundation models and orchestrators and can be tailored to specific enterprise needs with your own stack. Custom engine agents currently work as standalone Teams apps.

These are an evolution of Microsoft Teams bots and, just like before, you can use Teams AI Library and Teams Toolkit to create them. I intend to come back to Custom Engine Agents with a dedicated article.

Declarative? or Custom Engine?

When to create a declarative agent:

  • You want to take advantage of Copilot’s model and orchestrator.
  • You have external data that you want to make available to Copilot to reason over via a Microsoft Graph connector.
  • You have an existing API that could be used as an API plugin for read and write access to real-time data.
  • You have an existing Teams message extension that you can use as a plugin.

When to build a custom agent:

  • You want to use specific models for your service.
  • You need agentic AI support.
  • You want your service to be independent from Microsoft 365 Copilot, accessible to all Microsoft 365 users regardless of their Copilot licensing status.

Autonomous Agents

An autonomous agent is an AI system that can perform complex tasks independently, that is, without direct human intervention.

Let’s see a typical example of an autonomous agent created with Copilot: a prospective client sends you an email requesting an engagement. As soon as the email is received, the agent gets to work, extracting the relevant details.

It then follows a series of steps: verifying any previous engagement with this customer, validating the industry sector, and summarising the client’s needs; it then writes and sends an email to the relevant expert in your organisation, with all the client details.

The particular feature that makes a Copilot agent an autonomous agent is the fact that it can activate by itself: this is possible thanks to event triggers, defined in Copilot Studio, which may remind you of Power Automate triggers.

SharePoint Agents

SharePoint Agents are a specialised type of agent, tailored to SharePoint Online, that can be created outside of Copilot Studio. Every SharePoint site includes an agent scoped to the site’s content, ready to assist you instantly. These ready-made agents are an easy starting point to get answers without combing through the site or digging around with search—they can be used immediately without any customization.

For specific projects or tasks, any SharePoint user can create a customized agent based on the relevant files, folders, or sites, with just one click. 

The SharePoint Agents can easily be shared via email or within Teams chats. Not only are coworkers able to use the agent that you shared, but @mentioning the agent in a group chat setting gives the team a subject matter expert ready to assist and facilitate collaboration.  

They adhere to existing SharePoint user permissions and don’t broadly share the files you selected whenever you share the agent with others in your organization.

Agents created using SharePoint data are file-based. They are stored within the same site where they were created. Since they are files, you can manage them just like you manage other files. You can copy, move, delete, or archive them.