The robots.txt file is a very simple text file. Its main function is to stop certain search engine crawlers, such as Googlebot, from crawling and indexing content on a website, which matters for SEO. If you're not sure whether your website or your client's website has a robots.txt file, it's easy to check: just visit example.com/robots.txt. You will see either an error page or a plain-text file.
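If you would rather check from a script than a browser, a minimal sketch like the one below does the same thing using only Python's standard library (example.com is just a placeholder for the site you want to inspect):

```python
# Quick check for a robots.txt file; replace example.com with the real domain.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError


def fetch_robots_txt(domain):
    """Return the contents of the site's robots.txt, or None if it is missing."""
    url = "https://{}/robots.txt".format(domain)
    try:
        with urlopen(url, timeout=10) as response:
            return response.read().decode("utf-8", errors="replace")
    except (HTTPError, URLError):
        # A 404 or a connection problem means no usable robots.txt was found.
        return None


if __name__ == "__main__":
    content = fetch_robots_txt("example.com")
    print(content if content else "No robots.txt found - you would see an error page instead.")
```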
There are different ways to create a robots.txt file: you can generate it from your content management system, or write it by hand on your computer and then upload it to your web server.
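If you build the file by hand before uploading it, it is nothing more than plain text with a few directives. The sketch below writes out a minimal example; the /admin/ path and the sitemap URL are only placeholders for whatever you actually want to block or announce:

```python
# Build a minimal robots.txt locally before uploading it to the web server.
# /admin/ and the sitemap URL are placeholders, not requirements.
MINIMAL_ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
"""

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write(MINIMAL_ROBOTS_TXT)

print("robots.txt written - upload this file to the root of your web server.")
```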
The robots.txt file is the first file that search engine bots look for when they visit your site; if it is missing, there is a real chance that crawlers will not index all of your pages. You can update this tiny file later, with a few extra instructions, as you add more pages, but make sure you never put the main page under a Disallow directive. Google works with a crawl budget, and that budget is based on a crawl limit (roughly, how many pages its bots will crawl on your site in a given period).
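One way to make sure the main page has not accidentally ended up under a Disallow rule is to test the live file with Python's built-in robots.txt parser. This is only a sketch: example.com and the Googlebot user agent stand in for your own site and the crawler you care about.

```python
# Sanity check: confirm that the home page is still crawlable.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # downloads and parses the live robots.txt

# can_fetch() returns True if the given user agent is allowed to crawl the URL.
if rp.can_fetch("Googlebot", "https://example.com/"):
    print("The main page is crawlable; it is not caught by a Disallow directive.")
else:
    print("Warning: the main page is blocked by a Disallow rule.")
```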
A sitemap is vital for all websites because it contains useful information for search engines: it tells the bots how often you update your website and what kind of content your site provides. Its main purpose is to notify search engines of all the pages on your site that need to be crawled, while the robots.txt file speaks directly to the crawlers: it tells them which pages to crawl and which to skip. A sitemap is necessary for your site to be indexed, but a robots.txt file is not (as long as you have no pages that need to be kept out of the index).
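Because the two files work together, many sites point crawlers at the sitemap from inside robots.txt with a Sitemap: line. The short sketch below reads those entries back out with the standard-library parser (the site_maps() call needs Python 3.8 or newer, and example.com is again a placeholder):

```python
# List the Sitemap: entries declared in a site's robots.txt (Python 3.8+).
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

sitemaps = rp.site_maps()  # list of sitemap URLs, or None if none are declared
if sitemaps:
    for url in sitemaps:
        print("Sitemap declared:", url)
else:
    print("No Sitemap directive found in robots.txt.")
```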