This personal research explores abuses and vulnerabilities in AI models, with a focus on backdoor attacks in Large Language Models (LLMs) and related systems.
The main content is the Jupyter Notebook [Abuses and Vulnerabilities in AI Models.ipynb](./docs/Abuses_and_Vulnerabilities_in_AI_Models.ipynb), which provides a detailed exploration of these topics. A PDF version, [Abuses and Vulnerabilities in AI Models.pdf](./docs/Abuses_and_Vulnerabilities_in_AI_Models.pdf), is also available.
A presentation summarizing the research is available at the following link: [AI Model Vulnerabilities: Backdoors in LLMs and Beyond](https://abuses-and-vulnerabilities-in-ai-models-569cfd.gitlab.io/)
To run the presentation locally using pnpm and Slidev, run the following commands:
```bash
pnpm install
pnpm dev
```
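To generate a static build suitable for hosting (as with the GitLab Pages deployment linked above), Slidev's standard build command can be used. This is a minimal sketch, assuming the project's `package.json` defines the usual Slidev `build` script:

```bash
# Build a static single-page app into dist/ (Slidev's default output directory),
# assuming package.json maps "build" to "slidev build"
pnpm build
```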
## License
This work is licensed under the [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/). You are free to share and adapt the material, but you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may not use the material for commercial purposes.