Project Progress
LLMs
- Setting up the LLM environment locally (llama2-7b-chat); a loading sketch follows this list.
- Working with Pascal to develop self-hosted LLMs (developing the API); the request/queue flow is sketched after the diagram below.
- [WA Data & LLM Platform]
- [LLM API]
- [Bring LLM to WA Community]
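A minimal sketch of the local llama2-7b-chat setup. The notes do not specify the runtime, so the Hugging Face `transformers` library, the `meta-llama/Llama-2-7b-chat-hf` checkpoint, and the generation parameters below are assumptions, not the project's actual setup:

```python
# Minimal local llama2-7b-chat loading sketch (assumes Hugging Face transformers
# and access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # fp16 fits a 7B model on a single modern GPU
    device_map="auto",           # place layers on available GPU(s)/CPU
)

def chat(prompt: str, max_new_tokens: int = 256) -> str:
    """Run a single-turn completion against the local model."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

print(chat("Describe the obstacles between the robot and the charging station."))
```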
```mermaid
graph LR;
    Client -->|LLMTaskRequest| APIServer;
    APIServer -->|Response| Client;
    APIServer -->|QueueTasks| TaskQueue;
    LLMWorker -->|GetTask| TaskQueue;
    TaskQueue -->|PopTask| LLMWorker;
    LLMWorker -->|Response| APIServer;
```
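A runnable sketch of the request/queue flow in the diagram above, using only the Python standard library. The names and the dummy `run_llm` call are placeholders for illustration, not the actual service code:

```python
# Sketch of the APIServer -> TaskQueue -> LLMWorker flow from the diagram.
# Standard-library only; run_llm is a stand-in for the real model call.
import queue
import threading
from concurrent.futures import Future
from dataclasses import dataclass, field

@dataclass
class LLMTaskRequest:
    prompt: str
    result: Future = field(default_factory=Future)

task_queue: "queue.Queue[LLMTaskRequest]" = queue.Queue()

def run_llm(prompt: str) -> str:
    # Placeholder for the self-hosted model inference.
    return f"echo: {prompt}"

def llm_worker() -> None:
    """LLMWorker: pop tasks off the queue and push responses back."""
    while True:
        task = task_queue.get()              # GetTask / PopTask
        task.result.set_result(run_llm(task.prompt))
        task_queue.task_done()

def handle_request(prompt: str, timeout: float = 30.0) -> str:
    """APIServer: queue the task and wait for the worker's response."""
    task = LLMTaskRequest(prompt)
    task_queue.put(task)                     # QueueTasks
    return task.result.result(timeout=timeout)  # Response back to the client

threading.Thread(target=llm_worker, daemon=True).start()
print(handle_request("Plan a route around the detected obstacle."))
```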
Simulator
- Setting up the basic environment for the project.
- Dockerizing the simulator; a container launch sketch follows this list.
- Preparing for integration.
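A small sketch of launching the dockerized simulator from Python via the `docker` SDK. The image name, port mapping, and container name are hypothetical placeholders, not the project's actual configuration:

```python
# Launch the dockerized simulator from Python (docker SDK for Python).
# Image name, port, and container name are hypothetical placeholders.
import docker

client = docker.from_env()
container = client.containers.run(
    "simulator:latest",            # hypothetical image built for the simulator
    detach=True,
    name="sim-dev",
    ports={"8080/tcp": 8080},      # hypothetical control/telemetry port
    remove=True,
)
print(container.status)
```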
TODOs
- [Paper].
- Research Proposal:
- Construct a framework for integrating LLM with robotics in the context of navigation.
- *Reasoning Accuracy:
- Multi-modal LLMs vs. text-only LLMs: spatial-temporal reasoning issues.
- Common adaptation methods for LLMs: prompting, RAG, and fine-tuning.
- Reasoning Latency:
- On-premise open-source LLMs vs. cloud-based LLMs.
- Scaling LLM size (lightweight LLMs).
- Reasoning Security:
- Evaluation framework.
- Method:
- EyeSim, CARLA, ShuttleBus, or an indoor robot?
- LLM optimisations:
- Prompting: chain-of-thought, visual QA.
- *RAG: relational databases, knowledge graphs, vector stores, attention (a retrieval + prompting sketch follows this list).
- Fine-tuning?
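As a concrete starting point for the prompting and RAG items above, a dependency-light sketch of vector-store retrieval feeding a chain-of-thought style navigation prompt. The embedding function, map snippets, and prompt wording are illustrative assumptions, not a fixed design:

```python
# Toy RAG + chain-of-thought prompt assembly for navigation reasoning.
# The hash-based embed() is a stand-in for a real embedding model, and the
# map snippets / prompt wording are illustrative only.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic stand-in embedding; replace with a real encoder."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "little")
    vec = np.random.default_rng(seed).standard_normal(dim)
    return vec / np.linalg.norm(vec)

# Tiny "vector store": environment facts the robot might retrieve.
DOCS = [
    "Corridor B is blocked by maintenance equipment today.",
    "The charging station is next to the elevator on level 2.",
    "Room 2.14 can only be reached through corridor A.",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar snippets by cosine similarity."""
    sims = DOC_VECS @ embed(query)
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]

def build_prompt(task: str) -> str:
    """Chain-of-thought style prompt grounded in the retrieved context."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(task))
    return (
        "You are a navigation planner for an indoor robot.\n"
        f"Known facts:\n{context}\n"
        f"Task: {task}\n"
        "Think step by step about which corridors are usable, "
        "then give the final route."
    )

print(build_prompt("Navigate from room 2.14 to the charging station."))
```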
Additional
- Add name to the paper?
- Kaya account for GPU access.