We are looking for an enthusiastic, detail-oriented AI Intern to work on text processing and Large Language Model (LLM)-based applications. This role involves building, fine-tuning, and optimizing language models for natural language processing (NLP) tasks such as text classification, summarization, information extraction, and conversational AI. You will collaborate with our AI research and engineering teams to develop cutting-edge solutions in the NLP domain.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
- Assist in the design and development of NLP pipelines for tasks like entity recognition, summarization, sentiment analysis, and question answering.
- Preprocess and clean text data for training and evaluation of LLMs.
- Experiment with state-of-the-art LLM frameworks (e.g., Hugging Face Transformers, OpenAI API, LangChain) to solve real-world text processing problems.
- Support fine-tuning of pre-trained models on domain-specific datasets, using parameter-efficient fine-tuning (PEFT) techniques such as LoRA or full fine-tuning.
- Develop and maintain prompt engineering strategies to improve model performance.
- Research and implement techniques for Retrieval-Augmented Generation (RAG) and context optimization.
- Document experiments and present findings to the team.