diff --git a/README.md b/README.md
index 04f7891..2ca9e60 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@
 
 This personal research explores abuses and vulnerabilities in AI models, with a focus on backdoor attacks in Large Language Models (LLMs) and related systems.
 
-The main content is the Jupyter Notebook [Abuses and Vulnerabilities in AI Models.ipynb](Abuses_and_Vulnerabilities_in_AI_Models.ipynb), which provides:
+The main content is the Jupyter Notebook [Abuses and Vulnerabilities in AI Models.ipynb](./docs/Abuses_and_Vulnerabilities_in_AI_Models.ipynb), which provides:
 
 * An overview of backdoor attacks during training and inference.
 * Discussion of prompt injection and other vulnerabilities.
@@ -21,11 +21,11 @@ The main content is the Jupyter Notebook [Abuses and Vulnerabilities in AI Model
 * References to relevant research papers.
 * A summary of the CVSS (Common Vulnerability Scoring System) for evaluating the severity of vulnerabilities.
 
-See the notebook for a detailed exploration of these topics. Or get pdf version [Abuses and Vulnerabilities in AI Models.pdf](Abuses_and_Vulnerabilities_in_AI_Models_Backdoors_in_LLMs_and_Beyond.pdf) is also available.
+See the notebook for a detailed exploration of these topics. A PDF version, [Abuses and Vulnerabilities in AI Models.pdf](./docs/Abuses_and_Vulnerabilities_in_AI_Models.pdf), is also available.
 
 ## Slides
 
-A presentation summarizing the research is available at the following link: [AI Model Vulnerabilities: Backdoors in LLMs and Beyond](https://example.com/slides)
+A presentation summarizing the research is available at the following link: [AI Model Vulnerabilities: Backdoors in LLMs and Beyond](https://abuses-and-vulnerabilities-in-ai-models-569cfd.gitlab.io/)
 
 To deploy locally, using pnpm and slidev, run the following commands:
 
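
The hunk ends just before the command listing that "run the following commands" introduces, so those commands are not visible in this diff. For orientation only, a typical pnpm + Slidev local workflow (an assumption based on the standard Slidev starter, not taken from this diff) looks like:

```sh
# Assumed commands for a standard Slidev project; not part of the diff above.
pnpm install   # install slidev and the other project dependencies
pnpm dev       # start the Slidev dev server (http://localhost:3030 by default)
```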