
QuizCraft #

Good day! We are the QuizCraft team, and we can help you test your knowledge in a specific area.


QuizCraft is a website where you can find plenty of quizzes or create a new one from your text material. With QuizCraft, you can test your understanding of any topic. Simply provide the material you want to study, and QuizCraft will generate quizzes for you to assess your knowledge.

Using QuizCraft is as easy as this:

  • Step 1: Upload Your Material
  • Step 2: Customize Quiz Settings
  • Step 3: Generate Questions
  • Step 4: Take the Quiz
  • Step 5: Explore or search

Why Choose QuizCraft?

  • Easy-to-Use - QuizCraft provides a user-friendly interface, making it simple to create and take quizzes.
  • Comprehensive Assessment - Generate quizzes from any study material to thoroughly evaluate your understanding.
  • Hassle-Free Learning - Explore a wide range of topics and search for other quizzes anytime. Broaden your horizons and become smarter.

Ready to Get Started? Visit website

Also, check out the source code. You are welcome to contribute!

Team Members #

| Team Member | Telegram ID | Email Address |
| --- | --- | --- |
| Arsen Mutalapov (Lead) | system205 | a.mutalapov@innopolis.university |
| Nagim Isyanbaev | Nagim228 | n.isyanbaev@innopolis.university |
| Viktor Kovalev | blueberry138 | vi.kovalev@innopolis.university |
| Kirill Korolev | zaqbez39me | k.korolev@innopolis.university |
| Kiril Batyshchev | kbatyshchev | k.batyshchev@innopolis.university |
| Gleb Kirillov | Gelbas | g.kirillov@innopolis.university |

Final presentation #

[pdf]

Schematic Drawings #

```mermaid
stateDiagram-v2
    You --> LearningMaterial: have
    You --> QuizGenerator: refer
    LearningMaterial --> QuizGenerator: upload
    QuizGenerator --> Quiz: compose
    Quiz --> QuizStorage: save for community
    Quiz --> You: suggest to pass
    You --> QuizStorage: examine public quizzes
    Quiz --> YourAccount: save your results
```

Week 1 #

Value Proposition #

Problem #

Learning material is sometimes extensive. To master it, one needs an assessment. A short quiz is a good way to do so, but one is not always provided right after the text or anywhere on the web.

Solution description #

A person can submit the desired material, which may be large, to the system. The system then composes a quiz. The user can check their knowledge by answering questions on the material they have just studied. Moreover, it will be possible to try other quizzes generated by other users from different material on a similar topic.

Benefits to Users #

All this helps people figure out whether they have successfully understood the material or should study more. In addition, a user who wants to start a new book or article can first pass a generated test and may find, for example, that the material is very easy and not worth studying.

Differentiation #

Firstly, it is barely possible to find an existing solution that provides a set of quizzes for at least one desired topic. Secondly, you can always generate your own quiz from specific material and start passing it right after you visit our website.

User impact #

Generally, the website will be a hub (a community) where you can search for a specific topic to test your knowledge. So, instead of surfing the web, one can pick a desired topic, almost immediately pass a test, and figure out one's level of knowledge in a specific area. That is similar to websites with programming tasks. A potential ranking system or statistics on passing quizzes could help job recruiters screen candidates, as some of them already do by looking at ranks on CodeWars, LeetCode, etc.

User Testimonials or Use Cases #

Exams, tests, and open-ended questions exist in each and every course. These are the fundamentals of knowledge assessment. To prepare, students search for similar examples to try, since exams are about the course material. A quiz on a specific study topic is what they want. With a quiz hub and auto-generation, they won't need to search much. It is very convenient to load the text and answer questions about it until you completely understand the material.

Lean Startup Questionnaire #

  1. What problem or need does your software project address?
    • Knowledge testing is often not comprehensive. There is a lack of tests that support thorough studying.
  2. Who are your target users or customers?
    • Mainly, our target users are students and self-learners, since we position ourselves as an educational website.
  3. How will you validate and test your assumptions about the project?
    • We will check the number of users visiting the website (and the time they spend there). We will also ask users for feedback on the quality of quizzes and overall satisfaction.
  4. What metrics will you use to measure the success of your project?
    • Time spent on the site, number of quizzes to try or generated by one user, number of users looked at a single quiz, total number of likes, and dislikes of a quiz and questions individually
  5. How do you plan to iterate and pivot if necessary based on user feedback?
    • We will enhance or add the functionality that is in the most demand of users.

Leveraging AI, Open-Source, and Experts #

  • AI (Artificial Intelligence): Large Language Models will analyze the text and generate the quiz. In addition, information retrieval models will enable search and clustering.
  • Open-Source: Open Source Libraries (Such as LangChain or Transformers) will provide the predefined AI models with the functionality around them.
  • Experts in relevant domains: We will seek consultation on how LLMs operate and how to tune them properly for our specific goal.

Inviting Other Students #

The potential of our project looks infinite.

  • From the frontend side, it is always good to have more people to provide a pleasant user experience in every part of the website.
  • On the backend, the quality of quiz generation by the models can always be improved. There are many input formats, each with its own parsing and handling. There is also a wide field in training a model to better extract appropriate questions from the text while accounting for a potentially large context.
  • On the server side, plenty of microservices might be implemented: subscription handling, corporate account management, a search suggestion system, personal learning paths, and, as a consequence, new material generation and course creation.

Of course, all this lies in the future, far beyond the MVP. That is why, yes, we are open to help. Anyone with enough competence can look at the tracks above and join the team.

Defining the Vision for Your Project #

Overview #

The project is a website where a user can create a quiz or find a suitable one. If one needs more evaluation of knowledge in a specific topic, there is little to do but search for a quiz or test, or google answers to open-ended questions. That is not an easy task. That is why we provide a single place for all the quizzes, which are generated on demand. You should not search anymore: just provide the material, and here are the questions for you. Answer them, or the site will provide the answers, and you'll be better prepared for your future career.

Tech Stack #

Main language - Python. Our project uses machine learning models, which integrate well with the Python ecosystem. Also, Python is easy to write and work with.

Postgres database is well-known by the team and secure for our purposes.

In addition, Celery, RabbitMQ, and Redis are important additions for some functionality (such as asynchronous quiz creation).

Frontend - Flutter/Dart with HTML and CSS. Dart is a compiled language, so we won't have issues with page deployment. The team's expertise lies in Dart rather than JavaScript.

Deployment - Docker containers. This will help with scalability and performance.

Anticipating Future Problems #

Language models are not as fast as we would like. However, multithreading may help mitigate this to some extent, though tests must be run first. The potential for development is high, so we have to figure out which essential part to implement first. Yet user experience may suffer if the functionality is complex. We will rethink the development process and simplify things if necessary. Finally, the model may behave poorly in general. In that case, we will concentrate on the question types it generates well and provide those as the main mode of quiz generation.

Elaborate Explanations #

The main functionality is generating questions from text or its embeddings. To achieve a good result, we should carefully define a question format, a method of extraction and text splitting, and the refining and handling of context. In a simple approach, the text is split into sections, and the language model generates a few questions within each section. In addition, a model for evaluating the responses of this generative model should be involved. All this will allow displaying the desired quiz to the user on the website. The site's backend receives the learning material and the user's quiz answers and sends them to the language model for evaluation. Uniqueness is achieved by generating almost from scratch, without supervision.
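
The split-then-generate idea can be sketched as follows. This is a minimal illustration, not the production pipeline: the paragraph-based chunking strategy and the placeholder generator stand in for the real splitter and the language-model call.

```python
def split_into_sections(text: str, max_chars: int = 500) -> list[str]:
    """Split learning material into roughly equal sections.

    A real pipeline would split on headings or semantic boundaries;
    fixed-size paragraph chunks are used here only for illustration.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    sections: list[str] = []
    current = ""
    for p in paragraphs:
        if current and len(current) + len(p) > max_chars:
            sections.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}".strip()
    if current:
        sections.append(current)
    return sections


def generate_questions(section: str, n: int = 2) -> list[str]:
    """Placeholder for the language-model call: in the real system a
    prompt containing `section` is sent to an LLM that returns `n` questions."""
    return [f"Question {i + 1} about: {section[:40]}" for i in range(n)]


# A quiz is then the concatenation of per-section questions:
quiz = [q for s in split_into_sections("First topic.\n\nSecond topic.")
        for q in generate_questions(s)]
```

Splitting first keeps each prompt within the model's context window, which is what the "potentially large context" concern above is about.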

Feedback #


Value Proposition

Good explanation, and a nice idea. But have you considered the size of the material your app can handle? Besides, how will you grade the quiz? What will be the format of the questions?

The use cases stated are not correct. You need to give details on how your product will be used and by whom exactly.

Lean startup question

The whole section is very short, and you didn't spend much time considering the answers. How will you act on the feedback, and how will you iterate on it?

Targeted user:

Students, junior employees

Why?

AI

What ML algorithms/models will you use?

Vision Of The Project

Good

Overall

The report is good. You need to reflect more on the business and operational side. Besides, think more about how AI is going to help.

4/5

Feedback by Moofiy

Week 2 #

Tech Stack Selection #

  • Backend - Python/Django/DRF. We use Django because it is easy to start with for an MVP. Moreover, it has a lot of great features out of the box, such as database migrations, an admin panel, database management, and authorization. Django makes it easy to grow the codebase because it enforces a standard project structure. DRF will help us build the RESTful API. All of this is popular and extensible and has a large community of developers.
  • Frontend - Dart/Flutter. Flutter is a comprehensive UI toolkit that allows creating cross-platform mobile, web, and desktop applications from a single codebase. Several reasons:
    • Performance: Flutter uses the Dart AOT (Ahead-of-Time) compiler to generate highly optimized native code, resulting in excellent performance and smooth animations.
    • Development efficiency: The single codebase for multiple platforms reduces the development effort and ensures consistent user experiences across devices.
    • Increasing popularity and development: Big companies apply Flutter frameworks in their work and rewrite some modules.
  • ML - LangChain, Transformers/Python. These are the most popular libraries, with plenty of models and utilities around them. They help construct a pipeline of input processing, training, and testing the models together, and allow handling different file types.

Architecture Design #

Component Breakdown #

In general, we have 4 major modules: the REST server, the database, the language-processing classes, and the frontend pages. Splitting them, we get the following components:

  • Authorization service, admin panel. Coupled with a database and login page.
  • Search system, grouping quizzes. Processing on the ml-side and storing in DB.
  • Component with quiz creation: uploading files, setup settings. Web page and REST request.
  • Main view of quiz(es). Frontend view + REST API fetching from db.
  • System to pass quizzes and track the statistics. Backend process the incoming messages while the user clicks on the website.
  • ML pipeline: text splitter, filters, extra text processing, questions generator, answers generator.

Data Management #

The main data storage for our project is PostgreSQL. The data workflow will be the following:

  1. (Authorization) During registration, the user sends a login and password; the password is hashed, and then both are stored in the database.
  2. The user sends data for quiz generation to the server; the data is stored in the database and forwarded to the ML model, and the model's response is sent back to the user.

The backend server provides endpoints to access the data or calls the ML side with the necessary information.

User Interface (UI) Design #

When designing, we took into account the following: the time to action (which should be as short as possible), minimalism, simplicity, and light colors. When a user enters the website, he or she can immediately explore the existing quizzes or start creating a new one. We picked a blue palette. The user should see only a few things: the target page, a menu, and a header bar.

Screenshots: explore page, creation page, search page.

Integration and APIs #

We may use the OpenAI API to work with their language models. Additionally, we want to use OAuth2 servers for fast authorization.

Scalability and Performance #

Unfortunately, so far we can generate only one quiz at a time, so several users will wait in a queue. In the future, we may deploy several models or use an external system to parallelize the process. However, the major outcome of the project is a quiz base (hub), so users can explore other quizzes while waiting. The other parts (exploring or quiz passing) scale easily and run mainly on the client side.

Security and Privacy #

Firstly, we store only encrypted (using Bcrypt) passwords in a database. Secondly, we will use the https protocol for sensitive data transportation. Thirdly, we do not expose the stored downloaded files to anyone but the user who uploaded them. Finally, we will use CSRF tokens to protect from unwanted redirections from external sources.

Error Handling and Resilience #

  • Error handling: Django’s built-in exception handling mechanisms, such as try-except blocks, to gracefully handle exceptions. Also, we will have custom error responses to provide informative messages to API clients.
  • Reliability: Fault-tolerant strategies using Django middleware to handle common errors. Use techniques like retrying failed requests or circuit breakers to prevent cascading failures and promote graceful error recovery.
  • Logging: We will configure error logging in a Django DRF application, define logging settings in settings.py to capture error messages and stack traces. For example, set up a logger that writes to a file or uses a centralized logging service like ELK.
  • Monitoring: We will integrate a monitoring system like Sentry or Datadog by installing their SDKs and following their documentation. That enables error and exception tracking, performance monitoring, and receiving alerts for critical errors.
  • Prevention: Unit test suite in Django’s testing framework. Cover success and error scenarios, including expected exceptions, to identify and address issues early and ensure accurate error responses.
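
The logging setup described above could be sketched as a `settings.py`-style `LOGGING` dictionary; the logger name `quizcraft` and the handler layout are illustrative assumptions, not the project's actual configuration.

```python
import logging.config

# Illustrative config in the style of Django's settings.py LOGGING.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "verbose"},
        # In production a file handler or a centralized service (e.g. ELK)
        # would capture error messages and stack traces:
        # "file": {"class": "logging.FileHandler", "filename": "errors.log",
        #          "level": "ERROR", "formatter": "verbose"},
    },
    "loggers": {
        "quizcraft": {"handlers": ["console"], "level": "INFO"},
    },
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger("quizcraft")
```

Django consumes the same dictionary format directly, so the sketch carries over to `settings.py` unchanged.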

Deployment and DevOps #

The whole deployment will be in Docker containers combining all the parts of the project. The pipeline includes compiling the frontend pages, building and deploying the server code, and then exposing it to the web. All the code is split into modules on GitHub.

Questionnaire #

  • Tech Stack Resources:

    • Book - Django: The Practice of Creating Web Sites with Python by Vladimir Dronov. The knowledge from the book can help a developer with any part of development, for example database management, migrations, and working with the admin panel.
    • Flutter Apprentice (Third Edition): Learn to Build Cross-Platform Apps. This book covers a wide range of topics essential for building apps with Flutter. It provides guidance on using Flutter widgets for UI development, navigating between screens, handling networking and persistence, managing app state, utilizing Dart streams
    • Question Generation using Natural Language processing. We plan to use this course to find and learn NLP techniques for generating questions of various types.
  • Mentorship Support: A graduate student, Daniil Arapov, helps our ML engineers and consults on the future of development. His advice extends the learning outcomes. We believe he can open our minds to previously unknown solutions.

  • Exploring Alternative Resources:

  • Identifying Knowledge Gaps:

    • We do not know how to store large files on the server side. We should learn about the features of different databases, or about PostgreSQL's capabilities for this purpose. We also have to fill the gap in some aspects of Django security handling.
    • Bloc library is new for us. One commonly uses it for state-management in Flutter apps. We hope the found tutorials will help.
    • We are not familiar with the variety of possible ways to generate questions from the language-model perspective. We have to figure it out from articles and courses.
  • Engaging with the Tech Community: We are not engaged yet but will see what is possible to do.

  • Learning Objectives: The way we start the development highly correlates with the success of the project. We are exploring possible solutions, functionality and key components. From the tech perspective we will deepen into the implementation details and complexity of each framework and model by using the resources described above.

  • Sharing Knowledge with Peers: We meet to discuss the report and share the results of each week.

  • How have you leveraged AI: Before searching for the answer to any question, we refer to ChatGPT or other large language models to assess a quick solution and then decide whether we take it into consideration or search more.

Team Allocation #

After the project revision at the team meeting, we identified the key responsibilities:

  • Arsen Mutalapov - defining the components of the project, its goals, and objectives. Website design. Quality control and coordination of team members. Identifying issues and questions during the work and raising them at meetings.
  • Viktor Kovalev - Frontend (Flutter). Developing functionality of the website, and pages. View, pass quizzes, generate and set up a quiz, search, and explore the quizzes.
  • Nagim Isyanbaev & Kirill Batyshchev - ML-engineers (Transformers, Langchain). Training and testing language models to extract questions from the text. Composing a pipeline of text processing.
  • Kirill Korolev & Gleb Kirillov - Backend. Database handling. Authorization. Passing data to ML models and retrieving the results. REST API for the website. Error handling, monitoring, and logging.

Weekly progress report #

Besides all the given above we:

  • embarked on a research project to explore various existing NLP models and assess their effectiveness in question generation tasks. Despite being relatively unfamiliar with NLP models and the associated libraries, we overcame this challenge by delving into the documentation and watching tutorial videos. Additionally, our mentor, Daniil Arapov, played a vital role by providing us with a valuable Python library that offers a wide range of NLP models. With his guidance, we try to identify the most suitable models to address our specific problem. As a result of our efforts, we developed a temporary working solution capable of generating questions. Although our current solution may not be perfect, we have gained valuable insights and identified areas for improvement. Moving forward, we are now equipped with the knowledge and direction to continue our research and enhance the quality of the questions generated by our system.
  • prepared a simple design of the website described earlier.
  • created an initial architecture and a full-fledged user model for the database. In addition, we added and configured password hashing to the application pipeline.
  • built the base part of architecture and started to write a home page for our website. The main problems were choosing a library for state management and defining the architecture of the application. These problems are related to each other. So, we picked the Bloc library for state management, and with guidelines of Bloc documentation we came up with the architecture.

Week 3 #

Deliverables #

Technical Infrastructure #

We started our week by deploying the templates of each part of the project: the backend server with ML classes and functions, and the frontend pages. We have three separate git repositories so we can develop locally and independently. We then tried to compose an efficient workflow in which everybody pushes changes, and we can build everything on the virtual machine on demand to test it working together.

Backend Development #

This week the backend work was focused on data management. However, we also implemented a login function and answer checking for users.

Frontend Development #

We started implementing the layouts of the website. So far, we have a form where the user enters a title and pastes the input text from which the quiz is generated. Then, we prepared a view of all questions and answers of a retrieved quiz. Finally, we focused on an interactive system for passing a quiz: choosing answers and checking the score (how many answers are correct and where the mistakes are).

Data Management #

We have developed the necessary database schemas to store quizzes and their questions, as well as user data for login and registration. After that, the backend Django server was connected, and simple endpoints and services for CRUD operations on quizzes were built.
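
A plain-Python sketch of the schema relationships just described: in the project these are Django models backed by PostgreSQL, and the field names here are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Question:
    """One quiz question; `correct_option` indexes into `options`."""
    text: str
    options: list[str]
    correct_option: int


@dataclass
class Quiz:
    """A quiz belongs to the user who uploaded the material."""
    title: str
    owner: str
    questions: list[Question] = field(default_factory=list)


quiz = Quiz("Sample", "alice", [Question("2 + 2 = ?", ["3", "4"], 1)])
```

In Django terms, `questions` would be a reverse foreign key from `Question` to `Quiz`, and `owner` a foreign key to the user model.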

Prototype Testing #

As we deployed it on a local virtual machine, each team member could try it themselves. An obvious issue related to user experience is the necessity to wait while a quiz is being created. Also, some bugs, such as the wrong question format, were noticed. All these problems are of the highest priority for the next week.

Progress report #

You can watch this video to check our current state of affairs. This week we worked on composing the infrastructure and deployment for our simple prototype. Now we can focus on developing more features and analyzing user experience.

Prototype Features #

  • Quiz creation:
    • The user presses the button “Generate quiz” in the left menu to open the page
    • The user inputs the title
    • Inserts text (no limit so far)
    • Sends and waits.
    • At this time, the backend server calls the ML functionality (so far, we use ChatGPT as the main model to process the text, with prompts to compose questions) and blocks until it returns
  • Review:
    • When a quiz is ready, it appears on the website. It consists of multiple-choice and true/false questions. This is intentional, so that a user can pass it easily and eagerly.
    • A user can see all the questions and answers
  • Passing:
    • Also, the user can complete a quiz themselves
    • choose answers and send them to check
    • get the number of right answers
    • display the correct answers
    • see the total score for the quiz. So far, we count only a full mark or 0 as a point per question.
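
The all-or-nothing scoring described in the last point can be sketched like this; representing each answer as a set of option indices is our illustrative assumption.

```python
def score_quiz(correct: list[set[int]], submitted: list[set[int]]) -> int:
    """Award 1 point per question only when the submitted option set
    matches the correct set exactly; partial answers score 0."""
    return sum(1 for c, s in zip(correct, submitted) if c == s)


# The second question misses one of the two correct options, so it scores 0:
score = score_quiz([{0}, {1, 2}], [{0}, {1}])  # -> 1
```

This exact-match rule is what makes a multi-answer question either fully right or fully wrong.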

Challenges and Solutions #

Our main challenge of this week was Deployment:

  • The team's expertise is not high enough to deploy easily and quickly, knowing the problems and solutions that usually appear along the DevOps path. We could effortlessly build/compile the backend web server, the frontend application server, and the ML models locally. The real problem was how to connect them so that they can communicate on a single virtual machine. There were problems with the installation of requirements (especially PyTorch 2.0) and with establishing the connection between the frontend page/server and the backend REST API.
  • To resolve these issues, we decided to build and deploy everything separately on the virtual machine (without Docker, contrary to plan) and installed some requirements manually with a specific Python 3.10 command: "python3.10 -m pip".

AI language models are a new feature for us:

  • We had to choose a language model that performs best. There are quite a few of them, but we were not aware of libraries that could help utilize them. So, we asked our mentor (Daniil Arapov) to recommend some models. We tried ChatGPT, LangChain, and Transformers models: T5, Llama, Falcon, Vicuna, and Alpaca. The main difficulty is that they take a lot of RAM, and it is not possible to deploy a complete version on our machines.
  • We tested all these models locally on different text materials and settled on ChatGPT (since it does not require local space, as we use the API, and it outperforms the other models, being a more general language model).

Frontend:

  • We wanted to compile the pages and serve them from the backend. However, we didn't know how to do it properly in Django; it seems Django is not meant for that. That's why we stayed with the default way: serving pages from the Flutter server.

Next Steps #

The number one priority is to fix bugs in our current prototype. Those are:

  • Remove the necessity to stay on the website when the quiz is being created.
  • Unhandled errors. If the backend server or the models do not respond, a friendly message should appear on the front end ("Try later"). So far, we do not care about stability, and that causes some difficulties while testing.
  • Poor code quality. We should spend some time refactoring the code and thinking about its extensibility to reduce technical debt.
  • Unknown crash of a program on a virtual machine. We should identify the problem in case it is related to the code and not to the machine itself.
  • Differentiate between questions with multiple correct answers and a single right answer.

After that, we will implement the most used features (listed from highest priority):

  • Attach files (not only text) to create a quiz
  • Register and log in to review or pass all own quizzes
  • Specify the maximum number of questions to be generated. We will discuss it more in the future. It might affect the time of generation, but we should consider the quality as well. It depends on the performance of the language models. So, we have to test first.
  • Search for quizzes that are created by other users. For this, we should think about the parameters we might consider. For example, keywords, the topic of a material or a quiz, title, and description. Also, some quizzes should be private if a user wants.
  • Display statistics on quiz passing. How many quizzes were passed, the total number of questions answered, the average grade in percent, and how many correct and incorrect questions, the time to pass. It may look similar to LeetCode statistics.
  • Think of the criteria of quiz differentiation. For example, count time when passing a quiz. It would be better to have quizzes of different difficulties: Easy, Medium, and Hard.
  • Quiz generation optimization and scale. We have a problem that it is not possible to create two quizzes simultaneously. Users must wait like in a queue.
  • Group quizzes in packs to suggest for a user to try. Also, suggest similar quizzes after passing anyone.
  • Receive feedback from users. This might be likes or dislikes for each question and for the quiz as a whole, or a feedback form to fill in and send.

Feedback #


Prototype Features
Very good how you wrote the features. It’s better to describe them through user stories

User Interface
Any progress?

Challenges and Solutions
Good, you know what you are facing! You should put together a plan for how to face all the challenges.

Next Steps
Good

Overall
Very good report. Looking forward to seeing your results.

Grade
5/5

Feedback by Moofiy

Week 4 #

External Feedback #

Since we don't have much functionality in our project yet, the feedback is not exhaustive. The people who said they might use our website told us about necessary features we are already aware of. They said we should improve our design a bit so that others won't be put off by the system. Also, one person, looking at the search bar (which we have not implemented yet), suggested letting the user decide whether a newly generated quiz is public or private, since some learning material should not be exposed.

Testing #

It is well understood that thorough testing should come after we have a good number of features to test. So far, we have only the main one: the quiz generation workflow. However, since we deployed it locally on the virtual machine, every team member can test it individually at any time. This is what we do at our almost daily meetings. Also, when a team member deploys a feature, they test it several times to make sure at least the basic cases work locally.

Iteration #

This week we focused more on the iteration phase so that we could have at least half of the MVP. Our initial goals were quiz generation and search over already generated quizzes. Comparing the current progress with the goals, we state that almost half of them are completed: we allow generating a quiz from attached material, but there is no search yet. That's why we started learning how to retrieve quizzes by a search query. Our first iteration was about the ability to input raw text, give a title to a new quiz, and then pass or view it only once. This week we focused on refining the previous features (a better frontend appearance, stability, and a somewhat more developed AI backend to support features on the Python backend) and on displaying all of a user's quizzes. For this, we had to implement authorization as well. The next iteration will be about information retrieval and search by a user's query, along with testing the website and improving user experience on the frontend pages.

Progress Report #

You can watch this video to check our current state of affairs.

  • We reworked and optimized some ML functionality. For example, we changed the quiz format a bit so that we can convert a quiz into plain text for future search by query and clustering. In addition, we explored GitHub repositories on working with GPT models. However, we didn't find any free models or anything else that might help us.
  • During these weeks we also investigated other possible models. We tried to fine-tune the GPT-3 (davinci) model but gave up since it is paid. We constructed a fine-tuning pipeline for open-source LLMs (such as Falcon and Alpaca). The problem we faced was a lack of computational resources to complete the training. That's why we could only downgrade the models to a lower number of parameters and quantize them.
  • We understood that developing our own ML models is not feasible at our expertise level in 7 weeks. That's why we settled on ChatGPT with prompt engineering and tuning.
  • The frontend part turned out to be time-consuming work, so we won't manage to build all the desired pages that are a bit beyond the MVP. That's why we limited the project to generation, authorization, viewing all quizzes, passing, and search. That is how we narrow the scope of our project and ensure feasibility. This week we implemented login and a view of all quizzes. It was not clear initially how to manage the security of pages. We solved it by storing the token in local storage and verifying it each time a user enters any page.
  • From the backend perspective, we refactored most parts of the code to improve maintainability and reduce technical debt, as we planned. Besides, we added display of all of a user's generated quizzes and made quiz creation asynchronous. The only difficulty we met was related to the last feature. To be specific, we had a problem running Celery and RabbitMQ (the message broker) together on Windows (our local development environment). Therefore, we switched to Linux-based Ubuntu to test more comfortably.
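
Asynchronous quiz creation follows a task-queue pattern: the HTTP request only enqueues the job and returns immediately. As a dependency-free illustration (in the project, Celery with RabbitMQ plays the executor's role, and the function names here are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)


def generate_quiz(material: str) -> list[str]:
    # Placeholder for the slow ML call that composes the questions.
    return [f"Question about: {material[:30]}"]


def create_quiz_endpoint(material: str):
    """Return a handle right away instead of blocking until generation ends."""
    return executor.submit(generate_quiz, material)


future = create_quiz_endpoint("Some learning material")
questions = future.result()  # a real client would poll for completion instead
```

With Celery, `executor.submit` becomes `generate_quiz.delay(material)`, and the broker lets the worker run on a separate machine from the web server.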

Feedback #


External Feedback
So you didn’t get good user feedback? How many users did you talk to?

Testing
How do you do testing exactly? You didn’t show how!

Iteration
OK, it’s good that you are thinking of the iteration; also keep in mind:

An iteration plan is essentially the plan for an upcoming iteration. It would typically outline:

  • The goals and objectives for the iteration: what the team aims to achieve.
  • The features to be developed
  • The tasks needed to develop these features: this might include coding, testing, design tasks, etc.
  • Any assumptions or dependencies.
  • A timeline for the iteration

Overall
The report is OK, good progress so far. Your design is too basic, lacks adherence to UI standards, and doesn’t utilize colors correctly.

The Tokyo Ghoul song is a very cool song, but I would not advise you to add it to your demo. Maybe instead of music you could narrate to the viewer how to use the app.

Grade
4/5

Feedback by Moofiy

Week 5 #

Last week we had much less functionality than we have at the end of week 5, which is why we could not get much feedback on the website last time. During this week we asked our fellow first-year bachelor students (a subset of our target audience), as well as several 2nd-year and some 4th-year students, to fill in a feedback form. This form is our primary systematic and organized way to measure the general success of the different aspects of our project: usability, functionality, user interface, performance, and satisfaction.

Feedback #

Meetings #

We asked the same questions to students of our university (our colleagues) while they were testing the website during the meeting. Overall, we asked about 10 people, and here are the main outcomes:

  1. Did you find the generated quizzes helpful in assessing your knowledge?

    • The majority said that it is helpful. However, they also wanted to consider other existing solutions and try the same with ChatGPT.
    • They found some questions ambiguous or confusing.
    • Since all our questions allow multiple right answers, they did not figure that out at the start.
  2. How satisfied are you with the overall experience of Quiz Craft?

    • They found the platform easy to navigate and the quiz generation process straightforward.
    • 3 students mentioned some minor issues related to page loading speed, especially when logging in, and occasional glitches when switching pages.
    • Even though the functionality is not exhaustive, some of them liked the minimalism.
  3. Would you recommend Quiz Craft to others? Why or why not?

    • Students liked the idea of having a centralized hub of generated quizzes so that they could take different quizzes with friends.
    • One student mentioned that they would recommend Quiz Craft with some reservations, as they felt that the quality of the generated quizzes could be further improved.
  4. Is there any additional functionality or feature you would like to see in Quiz Craft?

    • The most common request from the students was the inclusion of a feature that provides explanations or feedback for each question after completing a quiz.
    • Also, the question system should be split into single-choice and multiple-choice questions.
    • Some students suggested adding a timer to simulate exam conditions.

Survey #

We focused on the following areas:

  1. Usability

  2. Functionality

  3. UI

  4. Performance

  5. Satisfaction


According to the responses to the form:

  • People don’t like to pay for a product. This fact is not significant so far, but it may become so in the future. So, we should think of ways to either place advertisements or add more premium features.
  • Even though design is not our primary objective, it hurts more than other areas: the average rating for the design is the lowest one.
  • The time of quiz creation plays an important role. This rating might have been lower if we did not suggest checking other quizzes while yours is generating.

Summary #

The common reaction was as follows: it’s good, but there is not much functionality or explanation. In general, people expect more from the website than we have in our MVP.

  • We decided to develop at least one page where we introduce ourselves and state the steps to use our product. We will do it next week. That was the most feasible and, at the same time, most impactful feature to implement within the remaining time.
  • In addition, we will add a few more error messages to enhance the user experience when users interact unpredictably.
  • We should notify users when they start to create one more quiz while they already have one generating.

Progress report #

  • Towards project goals:
    • Searching for quizzes by a query. We had to find an approach and a machine learning model to compare the query with each quiz and find the most suitable ones. We accomplished it with the FAISS library and OpenAIEmbeddings to convert the quiz text.
    • Explore page. We added some content that users might find interesting. There they can see the top-viewed quizzes, the most passed ones, those recently created by others, or those last viewed by the user. This task was not so complicated since we had prepared for it in advance: the database tables to track activity were set up, and the frontend functionality to display a list of quizzes fetched from the backend API was developed earlier.
  • Based only on the feedback:
    • Frontend: refined the design of the quiz creation form. Made the fields (especially title and description) smaller and more compact to view and fill in.
    • ML: ChatGPT now generates a description for each quiz. This allows users to quickly understand what a quiz is about before viewing it.
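The search above boils down to embedding the query and every quiz text, then ranking by vector similarity. A pure-Python sketch of that idea, with a toy bag-of-words “embedding” standing in for OpenAIEmbeddings and a brute-force scan standing in for the FAISS index (all names and data are illustrative):

```python
# Toy query-to-quiz matching. In production we embed with OpenAIEmbeddings
# and index with FAISS; the ranking idea is the same.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

quizzes = [
    "Planets of the solar system",
    "Basics of linear algebra",
    "History of ancient Rome",
]

def search(query: str, top_k: int = 1) -> list:
    """Rank all quiz texts by similarity to the query; return the best top_k."""
    q = embed(query)
    ranked = sorted(quizzes, key=lambda t: cosine(q, embed(t)), reverse=True)
    return ranked[:top_k]

print(search("solar planets quiz"))
```

FAISS replaces the brute-force `sorted` scan with an approximate nearest-neighbor index, which is what keeps the search fast as the quiz storage grows.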

Feedback #


Collecting and documenting feedback
good!

Feedback analysis
good!

Roadmap and enhancement
Missing

Grade: 4.5/5

Feedback by Moofiy

Week 6 #

Polishing the website #

Frontend: #

  • Increased the font size
  • Centered some components
  • Shrank the length of fields
  • Added explore and logout buttons to the header
  • Added a landing page with some instructions
  • Populated the explore page
  • Handled errors during registration
  • Added icons for public and private own quizzes

Backend: #

  • Enhanced question evaluation to be proportional (instead of binary)
  • Added explore metrics such as views and passes
  • Added a history of own views
  • Displayed all quizzes
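Proportional evaluation gives partial credit per question instead of all-or-nothing. A small sketch of one possible formula (the exact formula in our backend may differ; this is an assumption for illustration):

```python
# Sketch of proportional question scoring: fraction of correct options chosen,
# minus a penalty for wrong picks, clamped to [0, 1]. Binary scoring would
# instead return 1.0 only when selected == correct.

def score_question(correct: set, selected: set) -> float:
    """Partial credit for a multiple-answer question."""
    if not correct:
        return 0.0
    hits = len(correct & selected)    # correct options the user picked
    misses = len(selected - correct)  # wrong options the user picked
    return max(0.0, (hits - misses) / len(correct))

# Correct options {A, C}: both right -> full credit, one right -> half,
# one right plus one wrong -> the penalty cancels the credit.
print(score_question({"A", "C"}, {"A", "C"}))  # 1.0
print(score_question({"A", "C"}, {"A"}))       # 0.5
print(score_question({"A", "C"}, {"A", "B"}))  # 0.0
```

This rewards partially correct answers while still punishing guessing every option, which matches the feedback that all-or-nothing scoring felt confusing.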

AI: #

  • Comprehensive descriptions of quizzes are generated
  • Search is integrated successfully
  • Fixed bugs with quiz descriptions and answers