Should consumers and businesses use AI assistants?

June 24, 2025

AI assistants are spreading rapidly. They are being used to answer emails, make reservations, arrange travel, manage schedules, undertake research, summarize meetings, and automate financial transactions, among other things. Despite their growing popularity, many consumers and organizations do not understand what these tools are, how they operate, what their benefits and risks are, or how they should be evaluated.
AI assistants have evolved to the point where they can play constructive roles in government, business, and the consumer market. Yet basic rules need to be in place to protect privacy, maintain security, and guard against fraud. Policymakers should act quickly to protect consumers and ensure these automated assistants operate fairly, responsibly, and equitably.
How they operate
AI assistants are automated software tools that make decisions and execute operations based on predetermined criteria. Built on artificial intelligence (AI), machine learning, and natural language processing, they tailor technology to personal needs: users tell the software how they want particular things handled and rely on it to execute those wishes.
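To make "predetermined criteria" concrete, here is a minimal sketch of how a standing instruction might be represented and executed in software. Everything in it is a hypothetical illustration rather than any vendor's actual design; real assistants rely on learned models and natural language processing, not hand-written conditions like these.

```python
# Hypothetical sketch: a user's standing instruction ("wish") stored as
# data, then applied whenever a matching event arrives. Real AI assistants
# use learned models and NLP, not hand-written conditions like these.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Wish:
    description: str                 # the user's stated preference
    matches: Callable[[dict], bool]  # the predetermined criterion
    action: Callable[[dict], str]    # the operation to execute

# Example wish: "pay my rent at the start of each month."
rent_wish = Wish(
    description="Pay rent on the 1st of the month",
    matches=lambda event: event.get("type") == "new_day" and event.get("day") == 1,
    action=lambda event: f"initiate rent transfer on {event['date']}",
)

def run(wishes: list[Wish], event: dict) -> None:
    """Check each stored wish against an incoming event and act on matches."""
    for wish in wishes:
        if wish.matches(event):
            print(wish.action(event))

run([rent_wish], {"type": "new_day", "day": 1, "date": "2025-07-01"})
# -> initiate rent transfer on 2025-07-01
```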
When used properly, these assistants are powerful tools for personalizing how people use computers and for delegating particular functions to software. They can help people cope with the administrative and logistical demands of modern life while still keeping basic tasks under their control. They can act autonomously on multiple tasks and be personalized to the needs of their human operator, which explains some of their current appeal. People are bombarded with so much information and so many online tasks that they need some way of dealing with the digital onslaught and taming their information flows.
Possible benefits
AI assistants offer a range of benefits. One is the simple convenience of applications that understand people's preferences and act accordingly. For example, you can link your email to an AI assistant and have it provide guidance on basic correspondence. Unwanted emails can automatically be sent to a trash folder. Scheduling requests can be routed through your calendar and prioritized based on your relationship with the sender. Travel can be arranged based on your preferences for cars, planes, or trains, and on any particular requirements you have for personal or professional trips. Rent or mortgage payments can be made at the beginning of the month through links with your financial institution.
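As a rough illustration of the rules just described, the sketch below routes incoming email according to user-specified preferences. The senders, priorities, and folder names are hypothetical, and a real assistant would infer these judgments with machine learning rather than a lookup table.

```python
# Hypothetical sketch of preference-based email triage; all names below
# are illustrative only.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    is_scheduling_request: bool = False

# User-specified preferences: blocked senders, and contacts ranked by importance.
BLOCKED_SENDERS = {"promo@spam.example"}
CONTACT_PRIORITY = {"boss@example.com": 1, "client@example.com": 2}

def triage(email: Email) -> str:
    """Route an email according to predetermined user preferences."""
    if email.sender in BLOCKED_SENDERS:
        return "trash"  # unwanted mail goes straight to the trash folder
    if email.is_scheduling_request:
        rank = CONTACT_PRIORITY.get(email.sender)
        # Prioritize calendar requests by the user's relationship with the sender.
        return f"calendar (priority {rank})" if rank else "calendar (needs review)"
    return "inbox"

print(triage(Email("promo@spam.example", "50% off!")))  # trash
print(triage(Email("boss@example.com", "Sync tomorrow?",
                   is_scheduling_request=True)))         # calendar (priority 1)
```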
Software is very efficient at executing these wishes. If you face routine decisions, you can automate those activities based on your specified preferences. Control of your time is one of your most important assets, and software enables you to automate known activities and execute them according to your wishes. This replicates the behavior of a human assistant, but in the form of a computerized app.
AI assistants can also improve people's productivity. Technology can be a force multiplier, relieving people of boring, monotonous, or routine tasks. Users can automate multiple functions, and this delegation allows them to focus on more interesting or creative work. In that way, advanced technology can bring a number of benefits to those who understand how to deploy it effectively.
Possible risks
Despite these benefits, AI assistants can introduce several risks. First is a possible invasion of personal privacy. To operate, these applications need access to confidential information such as your email account, calendar, travel preferences, or financial details. Depending on how you set up your assistant, you may need to give it information about your banker, doctor, lawyer, or accountant so it can schedule appointments or execute particular actions on your behalf.
Consumer protections are only as good as the security associated with the software. Safety must be a top priority given the significant risks—from hacks and data breaches to ransomware and other unauthorized intrusions. Because AI assistants often handle sensitive or confidential information, it’s essential to ensure they are properly secured and equipped with strong protections against misuse or unauthorized access.
Fraud and malfeasance are known risks of any online application. As more of people's activities have moved online, criminals have followed the money. Theft and fraud have skyrocketed in recent years, and people have to be careful about what they do online and how they link their data systems. Connecting several sources of your personal information can multiply the risks substantially.
How consumers should think about AI assistants
There are a couple of factors consumers should consider when deciding whether to adopt an AI assistant. One is how much they trust digital tools to handle personal or sensitive information responsibly. Many individuals have doubts about privacy and security, making it difficult for them to trust software with automated decision-making. If you are highly suspicious of the digital world and uncomfortable with preauthorized operations, you probably should wait before commissioning a digital assistant.
Another big consideration is the consumer’s confidence in the commercial provider. Is it a U.S. or non-U.S. firm? Does it have a reasonable track record in terms of maintaining privacy and cybersecurity? How does the business deal with data breaches, and how long does it take to inform users of hacks or unwanted intrusions? As with any major consumer purchase, people should check out the application, evaluate alternatives, and be aware of how particular tools operate and what their level of risk is. Buyer beware is always good advice in the consumer market.
How businesses should evaluate them
Businesses need to think about which operations make sense to automate and what protections to put in place to ensure safety and security. They should take care not to expose core company assets or irreplaceable information that could be compromised through breaches or fraud. Further, privacy-by-design standards should be applied throughout the design and implementation of AI assistants to reduce the reputational risk a company may face.
Firms also need serious training programs to ensure their workers make wise decisions. In most organizations, humans are the weak link: people click on suspicious links, access databases in unsafe ways, or use weak passwords. Training people to recognize known risks is a prerequisite for the enterprise deployment of AI assistants.
One way to guard against known risks is by having insurance. When introducing AI assistants, companies can insure themselves against data breaches or loss of confidential material, which can help them manage risks and reap the advantages of assistants without exposing themselves to widespread dangers. As is true with other problems such as fires, accidents, or legal liability, insurance represents a way to narrow the risks while still taking advantage of new tools.
How governments should help consumers and businesses
As with any new application, government needs to provide formal oversight of how these tools are deployed and used. Government agencies regulate many aspects of finance, health care, and e-commerce, among other areas. The use of automated software should not exempt tech companies from the safeguards already required of brick-and-mortar businesses. Strong consumer protections in the digital world are important and will build consumer trust over the long run.
Policymakers should ensure AI assistants are transparent in how they carry out commands, responsible in the tasks they automate, safe and equitable for users, and capable of accurately verifying identities to prevent fraud or misuse. Without basic safeguards against fraud, data breaches, and hallucinations, people will not develop the trust required to ensure the long-term sustainability of this tool.
Finally, public officials should invest in the digital infrastructure that powers AI applications. That includes strengthening support for data centers and the power grid, reforming local permitting, and closing the digital divide so that everyone who wants access to AI assistants can get it. Without proper support for these things, it will be hard to develop safer and more secure versions of this new technology.