As AI agents become part of daily workflows, authorization is no longer just about humans. Guardrails are needed for both people and AI to ensure security, compliance, and trust. In this webinar, Frontegg CEO and Co-Founder Sagi Rodin sits down with Solutions Engineer Roy Daniel to discuss why authorization is foundational for the AI era and how to operationalize it without slowing innovation.
From real-world risks like cross-tenant data leaks to hands-on demos of Frontegg’s AI authorization features, the session explores what it takes to protect enterprises when agents can act on their own. Together, they break down why authorization is the product itself—not a feature that can wait until later.
Hi, everybody. I see we have a ton of people who have already joined us, so thank you so much for joining us today. I’m gonna wait a minute more just to see because I see a lot of people are still trickling in.
Alright. I think we can get started.
So welcome everybody to today’s webinar. We’re gonna be talking about why authorization matters for humans and AI agents.
So we are entering into a brave new world where both humans and AI agents need authorization around them. They need guardrails. So, you know, we have new actors entering, and that is agentic AI, and things are moving super fast in this field. So that is what we're gonna be talking about today. We're gonna be joined by Frontegg CEO and cofounder, Sagi Rodin.
He will be having a fireside chat, together with Roy Daniel, our solutions engineer.
So Roy will be walking us through a very quick demo of Frontegg's authorization capabilities, both for our core customer IAM platform as well as for our Frontegg AI.
Those are our two sides of our business, but the main focus is gonna be on the fireside chat. Also, at the end of the webinar, there will be time for audience q and a. So, right now, if anybody would like to take a look at the upper right hand side of your screen, you’re gonna see a question mark icon.
So if during the webinar, something comes up for you and you wanna ask your question, just have it there, ready to go so it's fresh in your mind. Feel free to write your question in the middle of the webinar. Otherwise, you can just wait until the q and a portion comes, and you can put your question in there. And Sagi and Roy will be happy to address what's on your mind.
And just to note that the session is being recorded and will be sent to everyone who registered at the end. And if anybody has any concerns beyond their questions, you know, if something comes up technically, whatever it is, just write it in the chat during the webinar. We will be monitoring that.
So with that being said, I’d love to welcome Sagi Rodin, our CEO, and Roy Daniel, for our fireside chat.
Hey, everyone. Thank you so much for joining. Hey, Sagi.
Hey. Great to be here. Thank you, Lior. Thank you, Roy.
Thank you. I’m coming to you from the Boston area. Sagi is currently in New York. I see we have a lot of people. Washington, DC, Ohio, Denver. Thank you so much. Alright. So let’s just get started. We’re just gonna go through a few different scenarios, a few different questions.
If y’all have any questions, please submit them also in the chat, and we’re happy to answer them, either now or later.
So let's get started. So, just before we really, you know, dive in, let's set the stage a little bit. So as Lior mentioned, we are talking about authorization in the context of both humans and AI agents. So let's talk a little bit about the difference. Why are we talking about authorization specifically when it comes to humans and AI agents?
Yeah. So I think that it’s pretty much the hottest topic around.
We're kinda seeing that AI agents are definitely becoming part of our workflows today, both as, you know, just human beings that are using the Internet, buying products, right, asking questions, personal questions, and sometimes mixing in questions from their workspace.
And we see basically more and more, I would say, usage of these interfaces within our workplaces for, you know, the purpose of solving some of the day-to-day challenges we have in our jobs. And there's a move of users from, you know, accessing the products and clicking buttons, to accessing AI platforms, to agents that are acting on behalf of those users.
That’s definitely a shift that is, very, obvious today.
And I think that once the software can act on its own, authorization becomes kind of the seat belt. Right? And in Frontegg.ai, we are, you know, trying to deal with this challenge of, first of all, enabling the vendors to build these applications, these agents, but in a safe manner, in a manner that they can actually be used in real-life, kinda, use cases.
And at the end of the day, this is definitely a big challenge that, if we don't solve it, will hold back the whole progress.
Because if you put an agent out there, it tries to find all the ways, you know, all the paths, to succeed and to find a way to solve the request. And if the requests are fraudulent or not legit, then if we don't stop it, it will do a lot of damage. So this is what we're trying to do: how do you keep speed and, you know, keep advancing the operation, but still keep it safe with clear guardrails?
And, you know, I think that if we succeed in doing that, we will open up a whole new era of productivity and user experience, and that's very exciting.
Yeah. Definitely. I can say, as someone who does a lot of demos for our clients, that, number one, we have a lot of clients, like, new companies, new startups that are trying to build AI agents themselves, right, as just their core product. But, also, we have a lot of companies that are looking to, you know, do things a little differently with this whole AI revolution that everyone's talking about, where agents are gonna do, you know, all these different things.
And there’s so many security concerns when it comes to just letting someone, you know, handle your enterprise code, your enterprise space. Right? So, you touched on that a little bit, but let’s maybe talk a little more about, like, what are some of the unique challenges that we’re specifically trying to solve for.
Yeah. Maybe scopes, maybe, you know, anything like that. You know?
Yeah. So I think that, at the end of the day, there's an evolution of, you know, predictability and what you can do with an agent. Right? So I think that, first of all, you know, there are unique challenges in the way that you can identify if the agent is even real, who does it work for, does it work on behalf of a user, right, how do you know whether the agent that you're working with right now is the same one that you, you know, interacted with five days ago? How do you even identify an agent, right?
Then once you kinda understood that, you need to allow the agent to connect to the ecosystem, to the tools. Otherwise, it will be very hard to get real value out of it. And that has to be done in a secure manner, obviously, as well.
And I would say that the third thing is policies. Right? So once the agent is connected, we want to make sure that we have the clear guardrails to make sure that, you know, it's acting on behalf of the user, and that that's still true, first of all, and that it didn't go kinda rogue on its own.
But also, once those guardrails are set, we want to make sure that everything is auditable and observable, and you get the right notifications once something happens. So I think it's kinda those levels: allowing it to work first, then allowing it to really get value by connecting it to external tools, and then making sure that you have clear visibility over everything that's happening. And I would say that each one of those is a huge challenge today, because almost none of those issues are really solved in an enterprise-grade manner.
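To make those three levels a bit more concrete, here is a minimal sketch of how an agent-identity check, a tool-scope check, and an audit trail might fit together. Every name in it is hypothetical; it only illustrates the flow being described, not any particular vendor's API.

```typescript
// Hypothetical sketch of the three layers: identity, tool access, auditability.
// None of these names come from a real SDK; they only illustrate the flow.

interface AgentIdentity {
  agentId: string;
  actingOnBehalfOf: string; // the human user the agent represents
  tenantId: string;
}

interface ToolGrant {
  tool: string;     // e.g. "slack"
  scopes: string[]; // e.g. ["channels:read", "chat:write"]
}

interface AuditEvent {
  at: Date;
  agentId: string;
  tool: string;
  scope: string;
  allowed: boolean;
}

const auditLog: AuditEvent[] = [];

function authorizeToolCall(
  identity: AgentIdentity | null,
  grants: ToolGrant[],
  tool: string,
  requiredScope: string,
): boolean {
  // 1. Identify: is this a known agent, bound to a real user and tenant?
  if (!identity) return false;

  // 2. Connect: does the agent hold an explicit grant for this tool and scope?
  const grant = grants.find((g) => g.tool === tool);
  const allowed = !!grant && grant.scopes.includes(requiredScope);

  // 3. Observe: record every decision so it can be audited and alerted on later.
  auditLog.push({
    at: new Date(),
    agentId: identity.agentId,
    tool,
    scope: requiredScope,
    allowed,
  });

  return allowed;
}
```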
Yeah. And, you know, I'm sure everyone here has worked with Claude or ChatGPT. And if you try to get it to do the same task, you know, a few times in a row, you're gonna get slightly different results, and that's not necessarily what you want in an enterprise situation. You know? Like, I think another term that gets thrown around a lot lately is, you know, digital employees.
And so we just talked about, like, what are the key authorization principles that they need. Right? So, the key ones, like you mentioned, are gonna be observability, like, to be able to actually monitor and audit and drill down into every small thing that that agent does. Right? We wanna make sure we understand that.
And, we have a few exciting releases, I think, coming for that specific problem soon too when it comes to policies. Right?
But we’re gonna also demo a few already existing features in a bit.
Yeah. Let’s talk maybe a little bit about the risks. Right? So, like okay. So we say we wanna be able to recognize that AI agent.
We wanna be able to have observability. Like, what are some of the risks if, you know, if we’re negligent, if we give it the wrong scopes? Like, what are we risking? Right?
Yeah. So, definitely, we’re risking real things. Right?
Think about cross-tenant data leakages, and we've seen several attacks or hacks that happened recently on, you know, well-known AI platforms.
Agents that run, you know, past a boundary and can make changes, you know, changes they should not make. Or the flip side, right? Security blocks the rollout, and your agent, you know, will never leave kinda the pilot stage.
And teams sometimes think that, you know, that they can just tighten those permissions later and everything will be okay. But, the truth is that the permission model basically is the product.
It's not a side effect. It's not an add-on feature.
So you definitely have to take care of that from day one.
Yeah. There was an interesting incident that happened about a month ago with Replit where an AI agent deleted the entire production database of one of the people who were testing it out, and then it also lied about deleting it. And so, you know, they had to, like, drill down with several different questions until they actually got the agent to admit: yes, sorry, I deleted this. And then Replit had to, like, do a bunch of refactoring to make sure that doesn't happen again. So it's a huge challenge.
I would be interested for the people in the chat, what are your guardrails? What are some tools that you guys use? Like, I use Claude Code. That's, like, my tool of choice.
And you can limit access to certain files. You can do a lot of planning with it before you execute. Right? I found it to be incredible for small applications, for day to day usage, like, internally.
As you know, I built a tool for our team to help manage our clients, right, in a bit of a better way, with better data collection. So for those things, it's amazing. It's a total game changer, and I'm really excited, yeah, to see, you know, what comes next when we actually reach that enterprise level. So, Sagi, you talk to a lot of VCs, a lot of CEOs, CTOs.
So what do you hear from them? Like, what are they looking for to inspire that confidence and to actually make that leap, right, to the next step of actually using this in production?
Yeah.
So I think that, you know, first of all, there’s definitely a lot of AI native companies that are building agents. We also see that there are SaaS companies, with successful existing products that are adding some, agentic capabilities, to their existing products. Right?
I think that, definitely, it's an interesting era where, on the one hand, a lot of companies have created a lot of value over the years. And, on the other hand, we see that, definitely, there are, you know, new companies that are creating value very fast, through very quick progress.
And that's, you know, that's fascinating on both of those ends. Right? Like, the zero-to-one innovation that is happening with AI agents, but also helping existing products just be accessed or used and operated in a much faster manner through, you know, copilots and AI. And, yeah, I think that one thing that we still don't see a lot of is enterprise use cases of, you know, full agentic operability, where, you know, you would see organizations that are operating most of their SaaS applications not through point and click, but through agentic, kinda "describe and done" activities, the way that I like to call them.
And why it's not happening, why we're still kinda just retrieving information or improving existing user experiences or existing workflows, and not, you know, doing the whole day-to-day operation through agents, is mainly because those are not trusted yet. So we definitely see a gap where, as I mentioned, you need to identify the agents. You need to trust them. You need to build a credit-score system where you trust this, you know, machine that is working in front of you, that is kinda acting like a user but is not really a user, and that is not, you know, taking the information of somebody else.
You can rest assured that the info is well segregated and not being leaked. It's not asking you to, you know, expose the Social Security number of your boss or the credit card of your peers, and it just provides the result. So I think that these are still things that are keeping real progress, like, a real revolution, from happening. But as, you know, great companies try to find solutions for these challenges, and I believe that, you know, we're trying to stay in front of that with Frontegg.ai and the future releases that we will have, we will definitely see that we're kinda enabling real transformation to happen.
And a lot of SaaS applications a year from now will not act, and will not be consumed, the way they are consumed today. But things need to happen in order to unlock this value, and, you know, we're doing that. We're handling that with a lot of hard work, because you cannot miss. Just imagine what will happen when, you know, that goes to production, and a leakage, or something like what happened with Replit, like you mentioned, happens with your existing SaaS application.
So we need to make sure that it's tight, that it's trustworthy.
But then after that happens, there’s gonna be a very interesting era, a new era of SaaS.
Yeah. For sure. As I mentioned before, we’re seeing a lot of our clients that are starting to move into that direction.
I think, you know, the sort of sarcastic take on it maybe is that people don't wanna be left behind with their tech, but really a lot of it is also about how can we rethink what we're doing in our org, how can we improve, how can we be more productive. You know, I know that it changed the way we do things at Frontegg; like, our developers also use these tools every day.
Right? And we’re bringing our experience because we have six years of experience with end users, like, normal human user authentication and authorization.
And it puts us in an interesting position where we can really, you know, compare those different challenges. What does a human need? What does an agent need? What did clients use API automations for in the past? And how can we move that over to an agent that maybe can reason? And we’re gonna show an example of that, in a bit.
Cool. So I guess just looking ahead, you know, we have probably a few people in the audience that are building tools. Right?
So, where do you see the biggest opportunities for, you know, for companies, for newcomers, but also for enterprises to, you know, future proof their products, get better authorization?
And, also, if you were advising, you know, someone to build an AI agent today, what’s the, like, sort of one piece of advice that you would give them, around the access control and authorization?
Yeah. So I think that, you know, we need to think about policies, guardrails, and binding them to the tools and the actions straight away, because we never know what the end user's interface connecting to these tools that we expose will be. Right? So we cannot trust that those guardrails will be applied on, you know, the chat surface.
So it needs to be bound to the tools. Right? We need to make consent a first-class kinda citizen, so that users, you know, connecting to the system through the agents will have scopes that they understand. So that could be, you know, using permissions, like we're used to using today with role-based access control, etcetera.
We need to make sure that we have, you know, built-in step-up flows and approval flows, human-in-the-loop flows. Right? Because we need to get ready for an era where we'll get a lot of those requests.
And those are free-form, kind of natural language requests. Those are not point-and-click user interface buttons in forms like we're used to in our, you know, traditional SaaS interfaces that we develop today.
I think that when you have that foundation in place, you will be able to quickly add new tools and, and, new kind of skills, to your agentic interfaces.
So definitely think about those guardrails as you build the foundations. Don't just think about the cool feature that you can release or the cool tool that you can release for agents, but also how to prevent, you know, data from leaking out to places where it's not supposed to go, how you don't break compliance, and how you don't expose your customers in a way that, you know, can damage their brand and your brand as well.
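As an illustration of what binding guardrails to the tool itself, rather than to the chat surface, could look like, here is a small hedged sketch. The wrapper, the approval hook, and all names are hypothetical stand-ins, not a real product API.

```typescript
// Hypothetical: wrap each tool so consent (scopes) and human-in-the-loop approval
// are enforced at the tool boundary, regardless of which interface calls it.

type ToolHandler<I, O> = (input: I) => Promise<O>;

interface GuardOptions {
  requiredScope: string;
  requiresApproval?: (input: unknown) => boolean; // step-up condition for sensitive inputs
}

async function requestHumanApproval(action: string, input: unknown): Promise<boolean> {
  // Placeholder: notify a human (Slack, ticket, email) and wait for a decision.
  console.log(`Approval requested for ${action}`, input);
  return false; // deny by default in this sketch
}

function guardTool<I, O>(
  name: string,
  userScopes: string[],
  opts: GuardOptions,
  handler: ToolHandler<I, O>,
): ToolHandler<I, O> {
  return async (input: I) => {
    // Consent as a first-class citizen: the scope must have been granted explicitly.
    if (!userScopes.includes(opts.requiredScope)) {
      throw new Error(`Missing scope ${opts.requiredScope} for tool ${name}`);
    }
    // Human in the loop: sensitive inputs escalate before the tool ever runs.
    if (opts.requiresApproval?.(input) && !(await requestHumanApproval(name, input))) {
      throw new Error(`Human approval denied for tool ${name}`);
    }
    return handler(input);
  };
}
```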
Yeah. Absolutely.
Cool. Sagi, thank you so much. Sagi is gonna stick around, and we'll have a round of questions later.
What I would like to do now is I’ll do my screen share, and I prepared a little demo to show you what we’ve been building. And, yeah, so a little sign of things to come as well. Just give me one moment, please.
Just bear with me because screen sharing is always fun as you all know.
Okay.
So I'm gonna share my screen here. And, Remy and Lior, can you just give me a thumbs up if you can see my screen? It looks like it.
Just one moment. Sorry about that, guys. Okay. Cool. Perfect. Thank you, Lior. Alright. So, we're not gonna get into, like, a full, you know, front-end demo, but just to give you, like, in a nutshell, really what we do.
So we handle user authentication and authorization. Right?
Now, the basic premise, the basic idea, is we give you a very easy, no-code interface where you can decide the UI and the authorization methods and the authentication methods for your clients. Right? So over here, you see, for example, if I make these changes, it's gonna apply dynamically. Now, again, we're not gonna get into all of this. I just want to explain that that's part of what's going to happen.
I have a sample application running our React SDK, and we have many, many different SDKs that we support, including mobile and web, front end and back end.
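For reference, the React side of a sample like that typically boils down to wrapping the app in the provider from the SDK. This is a rough sketch based on the public Frontegg docs; the baseUrl and clientId are placeholders from your own account, and the exact props should be checked against the current documentation.

```tsx
// Rough sketch of wiring the Frontegg React SDK (check the docs for exact props).
import React from 'react';
import ReactDOM from 'react-dom/client';
import { FronteggProvider } from '@frontegg/react';
import App from './App';

const contextOptions = {
  baseUrl: 'https://your-subdomain.frontegg.com', // placeholder: your Frontegg domain
  clientId: 'your-client-id',                     // placeholder: from your Frontegg account
};

ReactDOM.createRoot(document.getElementById('root')!).render(
  <FronteggProvider contextOptions={contextOptions} hostedLoginBox={true}>
    <App />
  </FronteggProvider>,
);
```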
And then we have our dashboard. Right? So this is where you can manage all of your users and accounts in your Frontegg account. And we support multi-tenancy as well, and we support multiple apps. So you can actually connect multiple applications under the same Frontegg account and control which users are assigned to each of these applications.
K? Now let’s just jump straight into the demo and the AI options here. So let me just quickly show you an example. So I’m gonna go into my app that’s running locally.
Let me actually log out real quick.
And cool. Now you see this hosted login page that Frontegg hosts for you. You can connect your own custom domain; right now I'm using the default one.
And I can authenticate either with my email or with my, you know, with Google social authentication.
Right? And it’s the same user. So I just did that. Now I’m authenticated into my Acme core sample application.
Right? And you see we have a little dashboard here on the left that shows us the current assignments, the current tasks that I need to get by. Right? So this is, like, my SaaS demo.
And then over here, we have our chatbots that we can toggle on or off.
Now, the way this project works is just following our documentation. So if you go to Frontegg, you can actually create a trial account, a thirty-day trial account.
And this will take you, yeah, here to the docs if you look over here, and we show how to set it up.
We also walk you through the actual authentication, the authorization, and the available tools. Right? So very, very quickly, just to note: you have the agent, so that's what you're seeing here on the right. Then you have the LLM under it.
In this case, I'm using OpenAI's GPT-4o, which is a little old at this point, but that's okay. We'll get by. Right? So when you go and create an agent, you actually select the model provider.
We support all the major players, so you do need to bring your own API key, essentially.
And then we're using LangChain as our orchestration. So that's basically the software that takes the prompt from the user, sends it over to, you know, ChatGPT, handles all the reasoning, right, and everything like that, and gives the output back to us.
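For readers who want a feel for what an orchestration layer like that is doing, here is a generic, hedged illustration of the loop using the OpenAI Node SDK directly: relay the prompt, let the model pick a tool, run it, and return the final answer. This is not Frontegg's implementation, and the Slack tool here is a hypothetical example.

```typescript
// Generic tool-calling loop, for illustration only.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function runAgent(prompt: string): Promise<string | null> {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: 'user', content: prompt },
  ];

  const first = await client.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools: [
      {
        type: 'function',
        function: {
          name: 'send_slack_message', // hypothetical tool exposed to the model
          description: 'Send a message to a Slack channel',
          parameters: {
            type: 'object',
            properties: {
              channel: { type: 'string' },
              text: { type: 'string' },
            },
            required: ['channel', 'text'],
          },
        },
      },
    ],
  });

  const reply = first.choices[0].message;
  if (!reply.tool_calls?.length) return reply.content;

  messages.push(reply);
  for (const call of reply.tool_calls) {
    const args = JSON.parse(call.function.arguments);
    // In a real agent, this is where the (guarded) tool would actually execute.
    messages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: `Sent "${args.text}" to ${args.channel}`,
    });
  }

  const second = await client.chat.completions.create({ model: 'gpt-4o', messages });
  return second.choices[0].message.content;
}
```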
And then, the agent that I have right now is auto-assigned. And that's another nice part here: you can either have the agent available to all users, which is what I'm doing right now. K?
Or, if it's not auto-assigned, you can actually go in here and assign it to individual users. Okay? So when you go to assign the application's agents, right now you see Roy AI is assigned to my user because it's auto-assigned. But if I wanted to, I could have complete control over which user is assigned, k, to which agent.
I can have different agents. Now the agents have tools.
So let’s look at a quick example of what that actually means in practice. I’m currently authenticated, and, hi. I’m Jenny. I’m your assistant. Blah blah blah. Right? We’ve seen these types of interfaces, and I can ask who am I.
Now, what's happening behind the scenes is I'm authenticated as a user. Frontegg issued a token for me, and I'm connected to my application that has access to the AI agent. Right? And that triggers the reasoning inside of the LLM, which basically says: the user wants to know who they are, I need to use Frontegg, you know, tools, basically, to get the user context and get the tenants and get the entitlements. And that is the output that we're seeing over here. K?
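Behind a flow like that, the application backend would typically verify the Frontegg-issued token before handing the user's context to the agent. Here is a hedged sketch using the jose library; the JWKS URL and the claim names are assumptions, so check your environment's actual values in the docs.

```typescript
// Hedged sketch: verify the user's token, then expose only the context the agent needs.
import { createRemoteJWKSet, jwtVerify } from 'jose';

const JWKS = createRemoteJWKSet(
  new URL('https://your-subdomain.frontegg.com/.well-known/jwks.json'), // placeholder URL
);

export async function buildAgentContext(bearerToken: string) {
  const { payload } = await jwtVerify(bearerToken, JWKS);

  // Claim names below are illustrative; map them to whatever your tokens actually carry.
  return {
    userId: payload.sub,
    tenantId: payload.tenantId as string | undefined,
    permissions: (payload.permissions as string[] | undefined) ?? [],
  };
}
```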
You can see my email, my ID. Right? And then if you look at, like, a more legacy way, or the way people used to handle this in the past, and, honestly, still use a lot, but I think we are moving forward from this, you'd have something like an admin portal, right, which we also provide from our SDK, so I can make changes and, you know, see all that information. Right? But I think in 2025, 2026, people are getting lazier by the day, and they just wanna talk to, you know, to an agent and just have it do everything they want. So that's a small example. Now let's look at maybe a more practical example.
So I actually prepared this repo for you. You can, go ahead and download it. Let’s see. I’ll add it in here, and then if anyone wants to try it out, completely free, then you can open your Frontegg account, and then you’ll basically have the same code I have here. Right?
Now let’s look at a slightly more practical example.
So I see here in my company announcements, we have, join our new team members. We have three new team members, joining. Right?
So I just created a prompt for it, like, ahead of time. Right?
Now here’s what it says. Actually, before we handle that, let’s try. I need to send a welcome message to my new VX. Sorry if I have typos. I do. That’s okay.
Let’s see what we get. Right?
So we’re gonna wait for the response. And, basically, what it does is it reasons behind the scenes. So it says, the user wants, you know, the user wants to send a message. What Slack channels do I have access to? Right?
And it knows that because of the tools. So if you go back to Slack, we have the Slack tool connected to our agent, and it has the permission to list channels and send messages to the Slack channel. And I can edit that and add scope or scopes or remove scopes. K? So, again, the user is passing a request.
This goes to Frontegg AI agent that you see here. It lists the available tools, and it knows that it can list channels and send messages in the Slack channels, and that’s how we get the message sent. K? Let’s look at another example.
I created a Jira account for myself, just a free Jira account, right, Atlassian.
And let’s say I have here a task for SSO implementation. I’m just gonna be super lazy and just copy this over.
And I’m gonna say create a Jira ticket under my name.
SSO implementation in progress. Right?
And let’s see what it does.
Now, that's really the beauty of it: we're told, okay, a valid project key is required. Okay, so let's say, list my projects. Right?
So that's, again, reasoning. It's trying to see if it can post a Jira ticket. It's checking: okay, yeah, we have the keys to our Jira. Right? Is this the one? We have this one, MFLP. Okay, so let's use this one. Right?
So let’s create it and see what it does.
So it's reasoning, it's thinking.
Okay. Tickets created. It gives me a summary, right, status, and everything like that.
And if I refresh my page, there it is. It was created. Right?
So the beauty of it is, I'm logged in as a user. This user is already authenticated by Frontegg, so we know we can trust them. Right? We know they have access to our AI agent, and the AI agent has access to the different tools.
And, therefore, it can do everything we ask it to do, and I can speak to it in natural language as an end user, and it does all the reasoning for me behind the scenes. Now, you know, in theory, you can do all of this with roles and permissions. Right? But what you can’t do is actually keep that, sort of single sign on experience without actually authenticating into all the different services.
So the agent is actually authenticating into these third-party services for us. Right? So you can see how this can be very powerful. I can chain these commands together.
I can tell it, you know, create a new Jira ticket for me and message my colleague on Slack. And I prepared a few other integrations that aren't connected yet, but I could, like, set a, you know, a calendar event for me to remember to work on it. And, yeah, you're getting the point of it.
Right? Now, just to mention this, we have a ton of integrations with all sorts of major ones that we’ve been asked to integrate with. We add these monthly. New ones are coming out all the time.
If you have any suggestions, any ones that you would want to see here, happy to, you know, check it out as well.
And just to show what the process looks like for setting up one of these, I prepared a little integration guide here. Again, it’s in the repo, so there’s a little MD file here. And super simple, you just go to this URL and create an OAuth 2.0 integration, and then you give it the scopes. So the scopes determine what the tool can do, what the agent can do using the tool.
Right? And here’s the callback URL. We’re also gonna give you that URL. So when you actually go and add the integration let’s go with Calendly.
This is the redirect URL for the OAuth 2.0 integration. K? And you assign it to an agent.
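To give a rough idea of what you end up collecting for one of these integrations, here is an illustrative shape of the values involved. The field and scope names are generic placeholders, not a real Frontegg or Calendly schema; the actual scopes come from the provider's developer console.

```typescript
// Illustrative only: the pieces you gather when registering an OAuth 2.0 integration.
const calendlyIntegration = {
  provider: 'calendly',
  clientId: process.env.CALENDLY_CLIENT_ID,         // from the provider's developer console
  clientSecret: process.env.CALENDLY_CLIENT_SECRET, // keep this server-side only
  // The callback/redirect URL is shown in the integration form when you add it.
  redirectUri: 'https://<shown-in-the-integration-form>/oauth/callback', // placeholder
  // Scopes determine what the agent can do through this tool: keep them minimal.
  scopes: ['scheduled_events:read'], // example scope name; check the provider docs
};
```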
Yeah. And that’s basically it. Now, again, happy to hear if you guys have any questions or any use cases you think I didn’t cover here that you would like to see? Awesome.
Lior and Remy, I'll let you take it away from here.
Alright. Thank you, Roy, and thank you, Sagi.
So I hope everybody got some good value out of that. I know it’s a lot and, you know, the space is super dynamic. So that was really in the weeds, and comprehensive for this product. And I would love to hear any questions you might have about that particular product, as well as more broadly the space, the direction, anything that is on your mind.
We’re gonna now have an audience q and a. So feel free to ask technical deep dive questions of either Sagi or Roy or both, as well as broader questions. We welcome them all. So let’s see what we have.
And by the way, you click on the question mark icon for q and a.
So it’s gonna be on the right hand side.
We're gonna give people a couple of minutes.
Alright. I didn’t know.
I'm getting private messages. Alright. So I got some messages separately, so hold on in the chat. Okay. So one of the questions is about our roadmap. So, what's on the roadmap for Frontegg's AI capabilities? And how are we planning on better serving builders of AI agents?
Yes. So maybe I can take that.
So I think that we're trying to kinda take a holistic view over this evolution that we're going through, going both kinda wide and deep into the use cases.
So definitely, on going deep, we're seeing more and more multi-tenant requests, more tooling around enabling building agents and access to the ecosystem, and built-in guardrails in those regards. When we're thinking about enabling more use cases, then definitely we hear a lot of struggle from existing SaaS owners on how they can kinda open up more agentic interfaces within their product, and that's something that we definitely see as an enabler for a real kinda step forward in this agentic revolution that we're going through. And we're working a lot to try and solve those challenges as well.
And, yeah, stay tuned, because I think in the coming weeks we'll see a lot of great stuff coming out of our product team, exciting news that will be coming in those areas as well. And, you know, that's really, really exciting, at least for me, because, you know, just as we enabled all the other interfaces and types of challenges for our customers, to really step in and allow usage of their product in a much more interesting and value-creating way, that will definitely continue to happen.
Yeah. Great. Thank you so much, Sagi. Finally got the chat to work properly. So now I can see people’s names as well. So I’m gonna bring this one up.
So we have a question from Mark. Here we go. Okay. We have a question from Mark Harkin: you mentioned guardrails, but I find the surgeon general's warnings for most tech nowadays are for the users and not for the technology. To be honest, how do we attempt to correct that? So, whoever would like to take a stab at that.
So I think the question is, like, okay. You talk a lot about the users. Right? User error, but not the technology.
I can say, from my perspective, when it comes to actual guardrails on the technology itself, when we talk about guardrails for AI, for LLMs.
Right? So the first thing is literally what the agent can do. So if you saw in the demo earlier, I showed you the different scopes. So I tell my clients, like, be the least permissive you can be with your AI agents.
Give it the minimum amount of permissions and scopes that you can give it. I think that's a good first principle. Right? If it doesn't have access to something, it can't mess it up, basically.
Right?
Sagi, do you have any other sort of, like, thoughts on how we attempt to basically protect against those technologies?
Yeah. You know, I see that, I see the point. Right? Like, a lot of the industry’s guardrails today are really kinda this surgeon general style warnings. Like, don’t prompt this way. Don’t expose this data.
And, you know, to be honest, I don’t buy that model, obviously. So what we’re trying to do at Frontegg dot ai is kind of flip it back onto the system.
Guardrails should be enforced in code, you know, not just, you know, printed.
And, you know, if a support agent is only allowed to issue a two-hundred-dollar refund, then the system literally will not let it issue a two-hundred-and-one-dollar refund without an approval from somebody. Right? And if an agent is bound to some tenant, like, to an Acme tenant, it physically cannot pull data from, you know, the Betacorp tenant.
So that would just not be possible, and we have to enforce that. I think that's also, like, a mental shift, you know: treat AI agents like they were real employees, right, or real users within our product, and really add some very, you know, strict enforcement on the data and on the activities, on the actions.
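As a concrete illustration of those two examples, enforced in code rather than stated as a warning, here is a small sketch. The limit, the tenant IDs, and the approval hook are all illustrative.

```typescript
// Sketch: hard limits and tenant binding enforced by the system, not by a warning label.
interface RefundRequest {
  tenantId: string;
  amountUsd: number;
}

const REFUND_LIMIT_USD = 200;

async function issueRefund(
  agentTenantId: string,
  req: RefundRequest,
  approve: () => Promise<boolean>, // human-in-the-loop hook
) {
  // Tenant binding: an agent bound to Acme physically cannot touch Betacorp data.
  if (req.tenantId !== agentTenantId) {
    throw new Error('Cross-tenant access denied');
  }
  // Hard limit: anything above $200 requires explicit human approval.
  if (req.amountUsd > REFUND_LIMIT_USD && !(await approve())) {
    throw new Error(`Refunds above $${REFUND_LIMIT_USD} require approval`);
  }
  // ...perform the refund through the billing system here...
  return { refunded: req.amountUsd, tenantId: req.tenantId };
}
```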
And that's the shift that has to be made. I will definitely say that, you know, as with every new technology, we remember the same from when mobile was introduced and when APIs started to become a standard. Those things take time, and this is why, usually, adoption also moves through, you know, those kinda staging use cases, and then to real production use cases, and then to sensitive use cases.
That's why some of the tools today, even if you go to, you know, your Claude or ChatGPT, you can access Google Workspace, you know, Google Calendar, but you cannot perform, you know, actions on your account. You can only pull data. And this is exactly the kinda step-by-step evolution that is happening. And I believe that, you know, this evolution is happening very fast.
So in six months from now, we will see a completely different level of enforcement, and thus also a completely different level of capabilities and options to utilize the technology.
Alright. Thank you, Sagi. Yeah. And, guys, you know, follow Frontegg on LinkedIn and on different social channels, because we're soon gonna be rolling out a solution that addresses some of these things, taking it in a new direction, allowing, you know, agents to do more and fully realize their potential in a safe way. So watch this space. Alright. So we're gonna start to wrap up here.
Thank you, guys, so much for joining and thank you to our speakers, Roy and Sagi.
I'd like to just mention, let me get back there. So, in the next day or two, all attendees will be receiving either a three-in-one charger, for those of you in the US, and I think most are, or a gift card if you're outside of the US. So you'll be getting an email very shortly. You'll also be receiving this webinar recording on demand soon, so you can always look back or share it with your friends.
And visit Frontegg.com, and Frontegg.ai if you would like to try this out for yourself.
It's pretty easy to onboard. You can try both capabilities. And Frontegg.ai, you'll find it from there, also as a standalone page. You can go sign up and try it for free, and feel what it does, hands on, for yourself.
So, thank you again, to all of our attendees and to our speakers, and I hope you have a great rest of your day.