If you examine the announcement at https://openai.com/blog/function-calling-and-other-api-updates, you'll see that function calling does not actually give LLMs the power to invoke functions directly.
Instead it gives them the power to populate JSON models of the arguments to functions, hopefully ones for which it has a good model.
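To make the shape of this concrete, here is a rough sketch of the `functions` parameter as it appeared in the announcement, using the weather example OpenAI themselves used. You describe a function with a JSON Schema; the model replies not with an invocation but with a `function_call` whose `arguments` field is a JSON string:

```python
import json
import openai  # the pre-1.0 openai package, as in the announcement examples

# The model never runs this function; it only sees the schema
# and fills in arguments that match it.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. London"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    functions=functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model has output parameters, not an invocation:
    # the arguments arrive as a JSON string for *us* to act on.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)
```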
So there is a new kind of output from such models: in addition to language they can output parameters. This is not such a stretch, given that they can already output code, which includes functions and parameters.
These arguments can be checked against the rules of the function itself, in the same way that written output must conform to the rules of the language.
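That checking can be entirely mechanical. A minimal sketch, assuming the `jsonschema` package and reusing the schema and `message` from the example above:

```python
import json
from jsonschema import validate, ValidationError

# Validate the model's proposed arguments against the declared schema
# before doing anything with them.
try:
    args = json.loads(message["function_call"]["arguments"])
    validate(instance=args, schema=functions[0]["parameters"])
except (json.JSONDecodeError, ValidationError) as err:
    # The model produced malformed or out-of-schema arguments.
    print("Rejected:", err)
```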
It is an obvious evolution as:
- It plays to the model's strengths: creating seemingly sensible patterns
- It's what many applications of ChatGPT are already doing.
The model itself will not invoke the functions, but it's easy to imagine invoking a function whose result provides a new prompt.
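A sketch of that loop, using the `role: function` message format from the announcement. The `get_current_weather` implementation here is a stand-in of my own; in reality you would supply whatever function you actually want run:

```python
def get_current_weather(location, unit="celsius"):
    # Stand-in implementation; a real one would call a weather API.
    return json.dumps({"location": location, "temperature": 21, "unit": unit})

if message.get("function_call"):
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)  # we, not the model, invoke the function

    # Feed the result back as a new prompt so the model can use it.
    followup = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[
            {"role": "user", "content": "What's the weather in London?"},
            message,
            {"role": "function", "name": name, "content": result},
        ],
        functions=functions,
    )
    print(followup["choices"][0]["message"]["content"])
```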
I think it's very wise to break the feedback loop here, as immediate invocation of arbitrary functions could have undesirable consequences.
I expect we will see some consequences from people experimenting with that in the near future.
The limitations are less obvious, but you can imagine the model falling prey to its own hallucinations, confidently producing argument values that are well-formed yet wrong.