updated links for the new repo

Author: Stefano Rossi 2025-07-10 00:51:37 +02:00
parent ee9158fae5
commit 657bb26f7c
Signed by: chadmin
GPG key ID: 9EFA2130646BC893

@@ -12,7 +12,7 @@
This personal research explores abuses and vulnerabilities in AI models, with a focus on backdoor attacks in Large Language Models (LLMs) and related systems.
-The main content is the Jupyter Notebook [Abuses and Vulnerabilities in AI Models.ipynb](Abuses_and_Vulnerabilities_in_AI_Models.ipynb), which provides:
+The main content is the Jupyter Notebook [Abuses and Vulnerabilities in AI Models.ipynb](./docs/Abuses_and_Vulnerabilities_in_AI_Models.ipynb), which provides:
* An overview of backdoor attacks during training and inference.
* Discussion of prompt injection and other vulnerabilities.
@@ -21,11 +21,11 @@ The main content is the Jupyter Notebook [Abuses and Vulnerabilities in AI Model
* References to relevant research papers.
* A summary of the CVSS (Common Vulnerability Scoring System) for evaluating the severity of vulnerabilities.
-See the notebook for a detailed exploration of these topics. Or get pdf version [Abuses and Vulnerabilities in AI Models.pdf](Abuses_and_Vulnerabilities_in_AI_Models_Backdoors_in_LLMs_and_Beyond.pdf) is also available.
+See the notebook for a detailed exploration of these topics. A PDF version, [Abuses and Vulnerabilities in AI Models.pdf](./docs/Abuses_and_Vulnerabilities_in_AI_Models.pdf), is also available.
## Slides
-A presentation summarizing the research is available at the following link: [AI Model Vulnerabilities: Backdoors in LLMs and Beyond](https://example.com/slides)
+A presentation summarizing the research is available at the following link: [AI Model Vulnerabilities: Backdoors in LLMs and Beyond](https://abuses-and-vulnerabilities-in-ai-models-569cfd.gitlab.io/)
To deploy the slides locally using pnpm and Slidev, run the following commands:
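
The command listing itself falls outside this hunk. A minimal sketch of what it likely contains, assuming a standard Slidev project with its dependencies declared in `package.json` and the usual `dev` script (both are assumptions, not taken from the repo):

```sh
# Install the project dependencies declared in package.json (Slidev among them)
pnpm install

# Start the Slidev dev server; by default it serves the deck at http://localhost:3030
pnpm dev
```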