Week #5 #

Feedback #

  • Feedback collection plan

Our plan is to reach out to users, especially those who are interested in our project. We will schedule one-on-one interviews with key users, demo the project, and give them the opportunity to investigate, navigate, and use it themselves. We began this work last week, and we will now continue with the users who already tested our back end, the ML part.

  • Conducted feedback sessions

We completed two one-on-one interviews with users who tested the program last week, following up with them to hear what changes and improvements they had noticed.

  • Analyzing feedback, identifying and prioritizing issues

The feedback indicated that the design was well-received and the model performed adequately. However, testers identified a significant issue with the deployment of the UI: it is currently not accessible on the Internet. This deployment challenge must be addressed urgently so that users can interact with the platform seamlessly. Resolving it is critical to moving the project forward and gathering more comprehensive user feedback.

Roadmap #

  • Short-term Goals (1-2 days):
    • Resolve deployment issues for the UI to make the platform accessible on the internet.
    • Conduct additional testing and validation of the ML model’s performance based on initial user feedback.
  • Mid-term Goals (5-6 days):
    • Implement improvements to the user interface based on user testing and feedback.
    • Begin development on offline access functionality to enhance user experience and accessibility.
  • Long-term Goals (1 week):
    • Scale backend infrastructure to support increased user traffic and ensure reliable performance.
    • Launch marketing initiatives to expand the user base and increase platform visibility in relevant communities.

Weekly Progress Report #

This week, our team implemented asynchronous processing on the ML side, added it to the test branch, and ensured it was functioning correctly. We developed and integrated semantic text chunking, which was also pushed to the test branch. The backend was secured with HTTPS, and we acquired a domain for one month. We tested increasing the backend CPU cores from 4 to 24, resulting in a performance improvement of approximately 6-7 times, excluding async benefits, with the potential to scale to 32 cores for the demo. Additionally, we finalized and tested all website buttons and UI components. These accomplishments have laid a solid foundation for deploying the new UI, completing server optimizations, developing offline access, and continuing to gather and analyze user feedback to guide our development priorities.
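The report mentions semantic text chunking without showing the implementation. A minimal sketch of one common approach follows: split text into sentences, then group consecutive sentences into a chunk until adjacent sentences become dissimilar or the chunk grows too large. The similarity function here is a toy word-overlap (Jaccard) measure; a production pipeline like the one described would typically compare sentence embeddings instead, and all names below are illustrative, not the team's actual code.

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; real systems use proper NLP tokenizers.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def jaccard(a: str, b: str) -> float:
    # Toy similarity: word overlap. A real pipeline would use embeddings.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def semantic_chunks(text, similarity=jaccard, threshold=0.3, max_sentences=5):
    """Group consecutive sentences into chunks, starting a new chunk when
    adjacent sentences are dissimilar or the current chunk is full."""
    chunks, current = [], []
    for sent in split_sentences(text):
        if current and (similarity(current[-1], sent) < threshold
                        or len(current) >= max_sentences):
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

For example, `semantic_chunks("Cats are small. Cats are cute. Stocks fell today.")` keeps the two related cat sentences together and starts a new chunk at the unrelated third sentence.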

Challenges & Solutions #

  • Challenge: Integrating asynchronous processing with existing ML workflows.
    • Solution: We carefully implemented and tested the async processing to ensure compatibility and performance improvements. Early results are promising.
  • Challenge: Ensuring the backend scales effectively with increased CPU cores.
    • Solution: Conducted thorough testing and verified a significant performance boost, ensuring the system can handle higher loads more efficiently.
  • Challenge: Deploying the new user interface.
    • Solution: We encountered several deployment issues, such as compatibility with existing systems and unexpected UI bugs. The team is actively debugging and testing to resolve them, aiming for a smooth deployment in the coming days.
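The async-integration challenge above can be sketched as a common pattern: wrap the blocking model call in a worker thread via `asyncio.to_thread` so the event loop stays free to serve other requests. This assumes the inference call is synchronous; `run_model` below is a hypothetical stand-in, not the team's actual workflow.

```python
import asyncio

def run_model(prompt: str) -> str:
    # Hypothetical stand-in for a blocking ML inference call.
    return prompt.upper()

async def infer(prompt: str) -> str:
    # Offload the blocking call to a worker thread so the event
    # loop can keep handling other requests concurrently.
    return await asyncio.to_thread(run_model, prompt)

async def handle_batch(prompts: list[str]) -> list[str]:
    # Process several requests concurrently instead of serially.
    return list(await asyncio.gather(*(infer(p) for p in prompts)))

if __name__ == "__main__":
    print(asyncio.run(handle_batch(["hello", "world"])))
```

The same shape applies whether the caller is a web framework handler or a queue consumer: the event loop never blocks on the model itself.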

Conclusions & Next Steps #

These accomplishments have set a solid foundation for the upcoming improvements. Our next steps are to finalize and deploy the new user interface, and to complete the server optimizations while monitoring their impact. We will continue to collect and analyze user feedback to guide our development priorities, ensuring we address user concerns promptly and enhance the Studyboost platform’s usability and performance.