Brown Assistant

The goal of this project is to implement a Retrieval-Augmented Generation (RAG) system that helps students at Brown University explore the courses offered in a given semester and understand their degree requirements by combining two sources:

A FastAPI backend handles data processing and serves queries efficiently, and a user interface provides an interactive front end for users.

Note: The concentration section currently covers only undergraduate programs, and course information is available only for the Fall 2025 semester. Both can easily be extended to other programs and semesters.

Demo

End-to-End Workflow

1) Data Acquisition

2) Indexing and Vectorization

3) Retrieval and Generation

4) Serving

In addition,

Models Used

1) Bi-Encoder Embeddings

2) Vector Database for Indexing and Retrieval

3) Cross-Encoder Reranking for Retrieval

4) Generator

Observations and Future Development

Currently, the pipeline performs well on most questions. Wrong answers usually occur when the question is too short, when it contains only technical terms (e.g., APMA 2230, CSCI 0320, ECON 2950) without any context, and/or when the wrong department or concentration is selected in the UI. To address the poor performance on technical-term-only queries, I tried integrating a sparse retriever (BM25) alongside the dense retriever and re-ranking the combined retrieved chunks with a cross-encoder, but this did not help much, so I removed it.
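For reference, merging sparse and dense results requires some fusion step. One common strategy is reciprocal rank fusion (RRF); this is a minimal, dependency-free sketch of that idea, not necessarily the fusion used in the removed experiment, and the document IDs below are illustrative:

```python
def rrf_fuse(sparse_ranking, dense_ranking, k=60):
    """Merge two ranked lists of document IDs with reciprocal rank fusion.

    Each document's score is the sum of 1 / (k + rank) over the rankings
    it appears in; higher is better. k=60 is a commonly used constant.
    """
    scores = {}
    for ranking in (sparse_ranking, dense_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document IDs by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings: BM25 tends to favor chunks containing the exact
# course code, while the dense retriever favors semantically related text.
sparse = ["apma2230_desc", "apma2230_prereq", "csci0320_desc"]
dense = ["csci0320_desc", "apma2230_desc", "econ2950_desc"]
fused = rrf_fuse(sparse, dense)
```

A document ranked highly by both retrievers (here `apma2230_desc`) ends up first in the fused list.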

In addition, the retrieval + generation process usually takes 2-5 seconds, though it can occasionally take up to 30 seconds to retrieve chunks from the vector store and generate an answer. To reduce latency, the cross-encoder step is currently disabled, but it can be re-enabled by specifying the rerank_top_n and rerank_min_score parameters in the rag.retrieve() function in api.py.
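For illustration, the two parameters would plausibly act as a score threshold and a result cap applied after cross-encoder scoring. The sketch below stubs out the cross-encoder with precomputed scores; the function name and the chunk texts are hypothetical, not the actual code in api.py:

```python
def apply_rerank_filters(scored_chunks, rerank_top_n=None, rerank_min_score=None):
    """Filter retrieved chunks by their cross-encoder relevance scores.

    scored_chunks: list of (chunk_text, score) pairs. Sorts by score
    descending, drops scores below rerank_min_score, then truncates
    to the top rerank_top_n results.
    """
    ranked = sorted(scored_chunks, key=lambda pair: pair[1], reverse=True)
    if rerank_min_score is not None:
        ranked = [(c, s) for c, s in ranked if s >= rerank_min_score]
    if rerank_top_n is not None:
        ranked = ranked[:rerank_top_n]
    return ranked

# Hypothetical cross-encoder scores for three retrieved chunks.
chunks = [
    ("APMA 2230 description", 0.91),
    ("unrelated chunk", 0.12),
    ("prereq info", 0.55),
]
top = apply_rerank_filters(chunks, rerank_top_n=2, rerank_min_score=0.3)
```

With these settings the low-scoring chunk is dropped and at most two chunks are passed to the generator.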

Although a significant share of the time is spent on text generation, caching is still an important feature to add, and it will be integrated in the next steps, since the same or similar questions may be asked by hundreds or thousands of students. FAISS could also be tried as the vector database instead of ChromaDB to check whether retrieval time drops significantly.
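As a starting point, even an in-memory cache keyed on a normalized form of the question would catch exact and trivially rephrased repeats. This is a minimal sketch of that idea (class name and behavior are illustrative, not part of the current codebase):

```python
import hashlib

class AnswerCache:
    """Tiny in-memory answer cache keyed by a normalized question.

    Normalization (lowercasing, whitespace collapsing) lets trivially
    different phrasings hit the same entry. A production version would
    add eviction/TTLs and possibly embedding-based similarity matching
    for near-duplicate questions.
    """

    def __init__(self):
        self._store = {}

    def _key(self, question: str) -> str:
        normalized = " ".join(question.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get(self, question):
        return self._store.get(self._key(question))

    def put(self, question, answer):
        self._store[self._key(question)] = answer

cache = AnswerCache()
cache.put("What is CSCI 0320?", "Example cached answer.")
hit = cache.get("  what is  CSCI 0320? ")  # different spacing/case, same key
```

A cache hit skips both retrieval and generation, which is where the bulk of the 2-5 second latency comes from.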

Running Locally

1) Create .env

Copy the example environment file and edit it:

cp env.example .env

Open .env and set OPENAI_API_KEY and API_TOKEN.
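The resulting file might look like the following (the values are placeholders; env.example in the repository is authoritative for the full variable list):

```shell
# .env -- values below are placeholders
OPENAI_API_KEY=sk-...
API_TOKEN=choose-a-secret-token
```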

2) Build and Run

docker-compose up --build
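For orientation, a two-service setup for a FastAPI backend plus a UI might be laid out roughly as below. This is an illustrative sketch only; the service names, ports, and paths are assumptions, and the repository's own docker-compose.yml is authoritative:

```yaml
# Illustrative layout, not the project's actual docker-compose.yml
services:
  api:
    build: .
    env_file: .env          # provides OPENAI_API_KEY, API_TOKEN
    ports:
      - "8000:8000"         # FastAPI backend
  ui:
    build: ./ui             # hypothetical UI directory
    ports:
      - "3000:3000"
    depends_on:
      - api
```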

3) Access

First run notes

Running on AWS EC2

1) Create EC2 Instance

2) Connect and Prepare the Machine

ssh -i your-key.pem ubuntu@your-public-ip

# Install Docker and Git, and start Docker on boot
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io git
sudo systemctl enable --now docker

# Allow the ubuntu user to run Docker without sudo
sudo usermod -aG docker ubuntu

# Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Log out and back in so the docker group change takes effect
exit
ssh -i your-key.pem ubuntu@your-public-ip

3) Deploy Code

Clone your repository and configure environment:

git clone https://github.com/ozyurtf/brown-assistant.git
cd brown-assistant
cp env.example .env
nano .env   # set OPENAI_API_KEY and any other variables

4) Launch Services

# Build images and start services in the background
docker-compose up -d --build

# Optionally follow the logs
docker-compose logs -f

5) Access