
In LLMs, to avoid discrimination and abuse, the authors add filter and watermark functionality. Could you tell me at which stage these are added? Is it in the weights (pretraining), the output layer (transformer), or fine-tuning?

Can I remove the filter by myself if I have an open-source AI like alpaca?

zzzgoo

1 Answer


Alpaca:

In LLM, to avoid discrimination and abuse, you can add filters such as adding noise/jittering to images or applying a watermark on them. This process typically takes place in the output layer (transformer) of an AI model. You cannot remove these filter functions by yourself if it is implemented using closed-source software like Alpaca because then you won't have access to its source code and will not be able to modify or replace any existing filters with your own implementations.


GPT 3.5:

The addition of filter and watermark functionalities in LLM typically takes place during the fine-tuning process, where the model is trained on a specific task or dataset. As for removing the filter, it would depend on how the filter was implemented and whether it can be easily modified or disabled in the code. If the AI is open-source, you may be able to make changes to the code to remove the filter functionality, but it would depend on the specific implementation and any licensing restrictions.
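To make the distinction concrete: a content filter that lives in the serving code (as opposed to behavior trained into the weights) is just a wrapper around the model's generate call, and anyone with the source can delete it. The sketch below is purely illustrative; the function names and the blocklist are hypothetical, not part of Alpaca or any real library.

```python
# Hypothetical blocklist; a real deployment would use a trained
# classifier or a much larger term list.
BLOCKED_TERMS = {"example_slur", "example_abuse_term"}

def generate_raw(prompt: str) -> str:
    # Stand-in for the model's actual generation call
    # (e.g. tokenizer + forward pass + decoding).
    return f"model output for: {prompt}"

def filtered_generate(prompt: str) -> str:
    """Post-processing filter applied after generation.

    Because this runs outside the model weights, a user with the
    source code can bypass it simply by calling generate_raw
    directly. Behavior instilled during fine-tuning (e.g. refusal
    training) cannot be stripped out this way.
    """
    text = generate_raw(prompt)
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return "[response withheld by content filter]"
    return text
```

This is why the "can I remove the filter myself?" question has no single answer: a wrapper like the one above is trivially removable in open-source code, while safety behavior baked in during fine-tuning would require retraining to undo.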
