
TLDR: If an AI system on behalf of a company makes a claim, is the company compelled to abide by that claim?

Background:

I was hoping to sign up for an account with a leading AI company in order to try out their products. During the sign-up process they wanted a phone number. They would not accept my VoIP number, insisting on a traditional cell phone number. A quick internet search yields at least one Reddit thread in which others indicate that they began receiving spam texts and calls shortly after providing this company with their phone number. I also came across a help article on their site indicating that they use the phone number to "verify your account" and that they "don't use your phone number for any other purposes". This help article is unique among their other articles in noting that it was generated by their AI. Meanwhile, their privacy policy states, "We disclose Communication Information to our affiliates and communication services providers."

  • If I could prove that this company had abused my contact info despite their published claims, would there be any legal liability? Would being under the protections of the CCPA or GDPR make for a stronger claim?

I then thought about the AI support features that some banks and other companies are using these days. I assume these programs are either decision-tree based or fed a very narrow pool of vetted, true information to act on, but that is just my assumption. As these support programs continue to be developed, it is inevitable that some will incorporate AI similar to this. For example:

Suppose an AI "agent" tells me that there is currently a promotion for opening a savings account with a minimum deposit of $1,000, providing a $500 cash bonus for maintaining that minimum balance for 6 months, as well as guaranteeing 5.375% APY interest for 12 months.

  • Could the bank be held to those terms? (This is similar in my mind to Is what the customer service of a health insurance company tells one of their customers legally binding? except that this one involves an AI rather than a human agent)
  • What if the terms of the offer are less specific such as not indicating a dollar amount for the bonus or a period for the interest rate?
  • What if the terms of the offer are completely outlandish such as 100x bonus or 100% APY?
  • Does it make a difference if the logs show "prompt engineering" the "agent" into making these claims?
  • If the bank cannot be held to those terms, what prevents the bank from turning a blind eye to these claims in the interest of getting accounts?
  • What if the "agent" is just a simple decision tree but was given "untrue" information? That is, does the behind-the-scenes tech make a legal difference?
user_48181
  • I apologize in advance for my formatting (be glad you don't have to endure my Power Points!) If someone wishes to dress this up better, I would greatly appreciate it. – user_48181 Dec 13 '22 at 23:48
  • You misunderstand the facts related to your fictional example, and that will muddy the water. Banks that provide chatbots do not empower their chatbots to negotiate the terms of products or services. The scenario you imagine, where a clever user manipulates the chatbot into some kind of outrageously one-sided business deal, can't arise. For the sake of getting a clear answer that addresses your real question, I recommend that you completely remove the fundamentally flawed hypothetical. (Source: I worked on a bank chatbot, including a bunch of competitive research.) – Tom Dec 14 '22 at 02:34
  • @Tom I appreciate your insight into the inner workings of the product you have worked on. I did acknowledge that I assume that is how most work. There are other chatbots on the market as well, and they will continue to be developed. I see my aside got truncated before I posted and will edit the question to re-add it now. – user_48181 Dec 14 '22 at 16:16
  • Typical website chatbots do not use AI to generate their answers at all. They check your question against a catalog of keywords and likely intents, then reply with the pre-set answer that best matches. You cannot manipulate those; the chatbot has no ability to alter the answers in its catalog. – Trish Dec 14 '22 at 18:00
  • @Trish I agree with that assumption for the set of chatbots I have personally interacted with to date. It is, however, only a matter of time before more advanced technologies like this are leveraged (if they haven't been already). That is the entire purpose of introducing the hypothetical: to focus the scenario on the use of an AI backend rather than canned responses, for people with an understanding of how some of these technologies work today. The question is at the top of the post. I've used the example to formulate probing questions I have about the general case [continued] – user_48181 Dec 16 '22 at 20:08
  • However, nothing should limit the interpretation of the general question to the specific case of a chatbot on a bank website. If you care to suggest an edit that would soften the edges of your concerns, I would be open to accepting it. – user_48181 Dec 16 '22 at 20:11

1 Answer


AIs are not people and cannot make claims

Companies are legal persons, and if they make legally binding commitments then they are bound by those just as a human would be.

Dale M
  • Thank you for your time. Can you expand on this? Would claims provided to the customer via the AI provided by the company be considered direct claims of the company, then? Or are you saying that because the AI is not human, nothing provided via the AI can ever have legal bearing on the company? – user_48181 Dec 14 '22 at 16:26
  • The point is that these claims are not made by the "AI" but by the company. So they can't say "that wasn't us, that was just our clueless AI making these claims"; the company is responsible. Likewise, if I as a company employee answered your phone call, the company would be responsible for claims I make. (The only difference is that a company could sue its employee for doing very stupid or criminal things; they can't sue their AI.) – gnasher729 Dec 14 '22 at 16:46
  • Technically, companies (which need not be corporations) are abstractions and are not people, so by your reasoning a company cannot make a claim, only an actual person can make a claim. What court has held that companies and natural people but not programs can "make claims"? – user6726 Dec 14 '22 at 17:12
  • @user6726 No, companies have legal personhood but aren't physical. They act through their agents, such as their employees or contractors. – Trish Dec 14 '22 at 17:55
  • @Trish, so are you saying that legally speaking, a company makes claims and a program does not, or are you saying that both a company and a program make claims, or neither? – user6726 Dec 14 '22 at 18:02
  • @user6726 They both make claims, but a company has legal responsibility for its claims and the AI doesn’t. Also, companies have money and can pay damages, while an AI (currently) can’t outside science fiction. And anyway, even if the AI had money, it wouldn’t have legal responsibilities. – gnasher729 Dec 14 '22 at 21:39