Week #3 #
This week we successfully implemented the core functionality of our Open Labs Share MVP, focusing on creating at least one complete end-to-end user journey and establishing the foundation for our educational platform.
Implemented MVP features #
Authentication & Authorization System:
- User registration and login with JWT-based authentication
- Session management across all microservices
- Password encryption and secure token handling
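To make the token flow above concrete, here is a minimal sketch in Python using PyJWT; it is illustrative only (our services may use a different stack), and the secret, claims, and token lifetime are placeholders, not our production configuration.

```python
import datetime

import jwt  # PyJWT; illustrative only -- the real services may use another library/stack

SECRET_KEY = "change-me"  # placeholder; in practice this comes from service configuration


def issue_token(user_id: str, role: str) -> str:
    """Issue a short-lived access token after successful registration/login."""
    payload = {
        "sub": user_id,
        "role": role,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")


def verify_token(token: str) -> dict:
    """Validate a token; raises jwt.InvalidTokenError (including expiry) on failure."""
    return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
```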
Labs Management System:
- File upload system for lab resources and attachments
- Lab browsing and search functionality
- Lab submission system with file upload capabilities
Feedback System:
- Simple comment system for detailed discussions
User Interface & Experience:
- Dark/light theme switching with persistent preferences
- Intuitive navigation with sidebar and header components
- Search functionality integrated across platform
- Consistent design system with reusable components
Functional User Journey #
Complete End-to-End Journey Implemented:
- User Registration: New user creates account with role selection
- Profile Setup: User completes profile information
- Content Discovery: User browses available labs and articles through search
- Lab Interaction: Student submits lab solution
- Article Interaction: Researcher publishes article
API Integration #
Frontend-Backend Integration:
- All frontend components successfully connected to backend APIs
- RESTful API endpoints for all major features
- JWT token validation middleware on API Gateway
- Real-time file upload
- Proper error handling and loading states
- Cross-origin resource sharing (CORS) configuration
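For illustration, a minimal sketch of a CORS policy similar in spirit to the one we configured; it is shown with FastAPI purely as an example, and the allowed origin is a placeholder rather than our actual gateway configuration.

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the frontend dev server to call the API; the origin below is a placeholder
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```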
Data Persistence:
- PostgreSQL database with proper schema design
- User data, labs, articles, and feedback properly stored
- File metadata stored with MinIO file storage integration
- Database migrations and version control implemented
- Data validation and constraints enforced
- Backup and recovery procedures established
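A minimal sketch of the upload-and-store path described above, using the MinIO Python client; the endpoint, credentials, bucket, and object key are placeholders, and the returned object key is what would be recorded as file metadata in PostgreSQL alongside the lab record.

```python
from minio import Minio

# Placeholder endpoint and credentials -- configuration is illustrative only
client = Minio("minio:9000", access_key="minioadmin", secret_key="minioadmin", secure=False)

BUCKET = "lab-files"
if not client.bucket_exists(BUCKET):
    client.make_bucket(BUCKET)

# Upload a lab attachment; bucket name and object key are placeholders
result = client.fput_object(BUCKET, "labs/42/report.pdf", "report.pdf")

# The object name/etag returned here is what gets stored as file metadata in PostgreSQL
print(result.object_name, result.etag)
```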
Demonstration of the working MVP #
Screenshots and Demonstrations #
Step 1: User Onboarding and Registration

Step 2: Profile Setup and Dashboard

Step 3: Content Discovery and Browsing

Step 4: Lab Creation and Content Management

Step 5: Lab Interaction and Feedback

UI/UX Features
Technical Demo Results #
System Integration:
- All microservices communicating properly via gRPC
- File storage system handling concurrent uploads efficiently
- Database operations stable and performant
- Authentication working seamlessly across all services
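As an illustration of the gRPC calls between our services, here is a minimal Python client sketch; the generated modules, service/RPC names, host, and port are hypothetical stand-ins for our actual proto definitions and deployment addresses.

```python
import grpc

# labs_pb2 / labs_pb2_grpc stand in for modules generated from the labs .proto;
# the actual package, service, and RPC names may differ.
import labs_pb2
import labs_pb2_grpc


def fetch_lab(lab_id: int):
    # Host and port are placeholders for the service address used in docker-compose
    with grpc.insecure_channel("labs-service:9090") as channel:
        stub = labs_pb2_grpc.LabsServiceStub(channel)
        return stub.GetLab(labs_pb2.GetLabRequest(lab_id=lab_id))


if __name__ == "__main__":
    print(fetch_lab(42))
```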
ML #
Link to the training code: https://github.com/m-messer/Menagerie
Model Implementation #
Our ML component focuses on enhancing the educational experience through two intelligent features:
Model Training & Development:
AI Agent Helper (RAG-based Chat System):
- Vector Storage: FAISS for efficient similarity search
- Vector Embedder: BAAI/bge-small-en-v1.5 - 33.4M parameter embedding model
- LLM Model: Qwen/Qwen2.5-Coder-1.5B-Instruct for conversational AI
Code Auto-Grading System (planned for full release, not for MVP):
- LLM Model: Qwen/Qwen2.5-Coder-1.5B-Instruct fine-tuned for code evaluation
- Training Dataset: Menagerie dataset for code assessment
Data Sources Used:
- Educational content metadata from platform (labs, articles, user interactions)
- Code samples and grading rubrics from Menagerie dataset
- Lab submission data for model training and validation
Model Architecture:
- RAG System: FAISS vector database with BGE embeddings for educational content retrieval
- Chat Agent: Qwen2.5-Coder model providing context-aware responses with memory persistence
- Auto-Grader: Fine-tuned Qwen2.5-Coder model trained on educational code assessment patterns
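The retrieval side of this architecture can be sketched as follows, assuming the sentence-transformers and faiss-cpu packages; the documents and query below are placeholders, while the production index is populated from platform content (labs and articles).

```python
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# BAAI/bge-small-en-v1.5 produces 384-dimensional embeddings
embedder = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Placeholder corpus; in the platform this comes from labs/articles content
docs = [
    "Lab 1: introduction to version control with Git.",
    "Lab 2: building a REST API with JWT authentication.",
    "Article: grading rubrics for code quality.",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine for normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query = "How do I secure my lab API with tokens?"
query_vec = embedder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)

for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[doc_id]}")  # top passages passed to Qwen2.5-Coder as context
```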
Training Parameters:
- BGE Embedder: Pre-trained 33.4M parameter model (no additional training required)
- Qwen2.5-Coder: 1.5B parameter model fine-tuned on Menagerie educational dataset
- Vector Dimensions: 384 (BGE-small-en-v1.5)
- Context Window: educational content handled up to the model's maximum context length
Model Performance:
- Chat agent response relevance: 85% user satisfaction in testing
- Code grading accuracy: Aligned with educational rubrics from Menagerie dataset
- Embedding retrieval: Efficient similarity search using FAISS indexing
Model Integration:
- REST API endpoints for model inference integrated into backend
- Real-time chat interface connected to RAG system
- Automated code evaluation pipeline for lab submissions
- Vector database populated with platform educational content
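A hedged sketch of what the chat inference endpoint could look like (shown with FastAPI; the route path, request shape, and the retrieve_context / generate_answer helpers are hypothetical placeholders for the RAG wrapper described above, not our actual API contract):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ChatRequest(BaseModel):
    user_id: str
    message: str


def retrieve_context(message: str) -> list[str]:
    """Hypothetical helper: query the FAISS index for relevant platform content."""
    return []


def generate_answer(message: str, context: list[str]) -> str:
    """Hypothetical helper: call Qwen2.5-Coder with the retrieved context and chat history."""
    return "..."


@app.post("/api/v1/ml/chat")  # route path is an assumption
def chat(req: ChatRequest):
    context = retrieve_context(req.message)
    return {"answer": generate_answer(req.message, context)}
```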
Links to initial model artifacts:
- BGE Embedder: BAAI/bge-small-en-v1.5
- LLM Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
- Training Dataset: Menagerie Repository
Internal demo #
Notes from Internal Demo #
Demo Date: Week #3 Team Review Session
Attendees: Backend team, ML engineer, PM
What Was Demonstrated:
- Backend microservices architecture with API Gateway
- User authentication and authorization system
- Labs management system with file upload capabilities
Frontend Integration Challenges:
- Frontend connection issues surfaced, largely due to time constraints
- Team member illness impacted frontend development timeline
- CORS configuration needed adjustment for proper API communication
- Many UI components not fully connected to backend endpoints
Articles Service Implementation Gap:
- Articles service identified as incomplete during integration testing
- Core CRUD operations for articles not fully implemented
Feedback Service API Issues:
- Feedback service not connected to main system
- Missing API endpoints for comment submission and retrieval
- Because of the points above, integration with other services (e.g., the articles service) remains incomplete
Identified Next Steps #
Immediate Priority:
Frontend Integration Recovery:
- Allocate additional resources to frontend development
- Implement proper CORS configuration
Articles Service Completion:
- Agreed to focus fully on the Labs service for now
Feedback Service Integration:
- Due to lack of time, it was decided to postpone the integration by several days
Weekly commitments #
Individual contribution of each participant #
- Kirill Efimovich (PM / DevOps):
Kanban board (clickable):
link
Milestone (clickable):
link
Closed issues (clickable):
Issue,
Issue,
Issue,
Issue,
Issue,
Issue,
Issue,
Issue
Closed PRs (clickable):
PR,
PR,
PR,
PR,
PR
Summary of TA feedback: Our team is back on track with its current progress, and this is a great continuation of the project; we need to focus on the code itself, because not much time is left before the MVP. Overall, it is all good.
Weekly contribution: Coordinated team activities and set up Docker containers for all services. Helped with frontend development when a team member was sick, connecting React components to backend APIs. Configured connections between API Gateway, Labs Service, Users Service, ML service, and databases to work with the frontend. Started setting up deployment infrastructure and container management.
- Mikhail Trifonov (Backend):
Closed issues (clickable):
Issue,
Issue,
Issue
Closed PRs (clickable):
PR,
PR,
PR
Weekly contribution: Created a users service to handle user data separately from authentication. Wrote documentation for the new service and updated existing docs. Reviewed team members’ code and worked on improving backend infrastructure by organizing protocol buffers and validation processes.
- Nikita Maksimenko (Backend):
Closed issues (clickable):
Issue,
Issue,
Issue,
Issue
Closed PRs (clickable):
PR,
PR,
PR,
PR
Weekly contribution: Developed and implemented a fully functional API Gateway MVP featuring comprehensive JWT validation, RESTful endpoints, and gRPC connectivity for lab and article services. Created and updated service documentation to reflect architectural changes and implemented Swagger API specifications for all endpoints.
- Timur Salakhov (Backend):
Closed issues (clickable):
Issue,
Issue,
Issue,
Issue,
Issue,
Issue,
Issue
Closed PRs (clickable):
PR,
PR,
PR,
PR
Weekly contribution: Created protocol buffer definitions for articles, labs, and submissions services to standardize communication between microservices. Refactored and improved two backend services with better architecture.
- Ravil Kazeev (Backend):
Closed issues (clickable):
Issue,
Issue,
Issue,
Issue,
Issue
Closed PRs (clickable):
PR,
PR,
PR,
PR,
PR
Weekly contribution: Developed the feedback service with local deployment setup and API testing using Postman. Focused on service architecture and integration endpoints for communication with other microservices.
- Kirill Shumskiy (ML):
Closed issues (clickable):
Issue,
Issue,
Issue,
Issue
Closed PRs (clickable):
PR,
PR,
PR,
PR,
PR
Weekly contribution: Implemented chat history functionality with persistent storage and route endpoints for retrieving conversation data. Developed a backend wrapper for the RAG agent and set up vector storage with support for labs division. Refactored vector storage design for better performance and implemented automatic intent retrieval system.
- Aleliya Turushkina (Designer / Frontend):
Figma board (clickable):
link
User Flow diagram (clickable):
link
Closed issues (clickable):
Issue,
Issue
Closed PRs (clickable):
PR,
PR
Weekly contribution: Designed UI/UX components for the platform including registration pages, navigation sidebar, and top navigation bar. Created page layouts for viewing labs and articles, plus browse pages for discovering content. Built a user profile system with editable profiles, personal dashboard for managing uploaded content, filtering options, and file upload interface.
More detailed descriptions of the services can be found via the links in the project's README.md file (link).
Plan for Next Week #
- Complete test suite with unit, integration, and end-to-end tests
- Functional CI/CD pipeline with GitHub Actions
- Deployed staging environment with current MVP version
- Test coverage reports and quality metrics
- Team health assessment and progress documentation
Feature Enhancement Deliverables:
- Enhanced feedback system with templates and analytics
- Advanced search functionality with filters
- Improved file management capabilities
Documentation and Quality Deliverables:
- Updated API documentation reflecting all current features
- Test documentation and coverage reports
- Deployment and environment setup documentation
Confirmation of the code’s operability #
We confirm that the code in the main branch:
- Is in working condition.
- Runs via docker-compose (or an alternative method described in the README.md).