# Start From Scratch
If you don't have any manual or automated tests yet, follow this guide. Otherwise, pick the guide that matches your setup from this section.
# Create a new project
Register at app.testomat.io and activate your user account. Then create a new project.
When creating a project, choose its type. If you plan to follow BDD descriptions or use the Cucumber framework in the future, choose a BDD project. If you are unsure, create a "Classical" project. The interface will differ slightly depending on your choice.
- Within a BDD project, feature definitions are written both in source code (by engineers) and in Testomatio (by managers and QAs). This lets you track and plan Cucumber automation and keep the actual feature files synchronized with test cases. When a scenario has changed in the system but has not been updated in the code, you will be notified that the scenario is out of date.
- In a Classical project, test cases are written in free form using Markdown. Automated tests are synced with test cases, so you will see the test description on one tab and the test code on another. When the description of a test case changes, you will be notified that the test might need to be updated.
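For a BDD project, a synchronized scenario could look like the following. This is a hypothetical example: the feature name, steps, and email address are illustrative, not taken from the product.

```gherkin
Feature: User management

  Scenario: Create a new user
    Given I am logged in as an administrator
    When I create a user with email "jane@example.com"
    Then the new user appears in the users list
```

If this scenario is later edited in Testomatio but the `.feature` file in the repository is not updated, the project marks it as out of date.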
Choose a name for your project and skip the Repository URL field. Click "Create" to start a new project.
# Write First Test Cases
Now you can start creating suites & test cases for your project. For this we provide a bulk-create input, so you can create as many test suites as you have in mind:
Select a suite to create new test cases inside it. You can do this quickly using the same bulk creation editor:
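As a rough sketch, the bulk editor takes one title per line, so a first pass at a user management area might look like this (the titles are illustrative and the exact editor syntax may differ):

```text
Create a new user
Edit user profile
Deactivate a user
Assign a role to a user
```

Each line becomes a separate test case inside the selected suite.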
When tests are created they are marked as "manual", so they are ready for manual checks.
You can now add a description to each test case. However, these test cases are fairly clear from their titles, so we can try executing them to verify the user management part.
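In a Classical project, a free-form Markdown description for one of these cases might look like this (the steps and email address are illustrative, not a prescribed template):

```markdown
A user with the Admin role can invite new users.

### Steps
1. Log in as an administrator
2. Open the Users page and click **Invite**
3. Enter the email `jane@example.com` and confirm

### Expected Result
An invitation is sent and the new user is listed as "Pending".
```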
# Run Manual Tests
When starting a run, you can set a title for it and choose environment options (the list of options can be configured).
Once launched, you will see the list of all test cases. Mark them as passed or failed. When a check has failed, you can write a description of the failure and attach an image.
Once the run is finished, the overall result of the run should look like this:
A more detailed report can be seen by clicking the "Report" button.