A robots.txt file mainly tells search engines which pages of your site to index and which to skip. A search engine's spider crawls the pages of a website and reports back so the engine can take the appropriate action. When you want to exclude files from a search engine's index, the robots.txt file plays a key role.
Guidelines: Robots Txt Guidelines
Help Tool: Robots Txt Tester - This tool helps you check whether your robots.txt file is working properly.
1) Hide directories: If you don't want a directory to be publicly visible, just specify the directory name, so that all files in that directory are blocked from search engines.
User-agent: *
Disallow: /admin/
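You can verify rules like the ones above before uploading them. A minimal sketch using Python's standard urllib.robotparser (example.com and the file paths are placeholders):

```python
from urllib.robotparser import RobotFileParser

# The same rules as in the example above.
rules = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Everything under /admin/ is blocked; other pages stay crawlable.
print(rp.can_fetch("*", "https://example.com/admin/users.html"))  # False
print(rp.can_fetch("*", "https://example.com/index.html"))        # True
```

This lets you catch a typo in a Disallow line before search engines ever see it.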
2) Disallow pages: If you want to hide or block particular pages, specify each page's path after Disallow. Once you generate the robots file, copy the code into your robots.txt file and upload it to the top-level root directory of your hosting.
User-agent: *
Disallow: /page1
Disallow: /page2
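The same standard-library check works for per-page rules. A small sketch (paths /page1 and /page2 come from the example above; /page3 is a hypothetical unblocked page):

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /page1
Disallow: /page2
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Only the listed paths are blocked.
for path in ("/page1", "/page2", "/page3"):
    print(path, rp.can_fetch("*", "https://example.com" + path))
```

Note that Disallow rules match by prefix, so `Disallow: /page1` also blocks paths such as /page10; add a trailing character (for example `/page1$` where the crawler supports it) if you need an exact match.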
You need to upload the robots.txt file to the top-level root of your public HTML directory. When a search engine crawls your web pages, it first fetches robots.txt and follows its directives when deciding which pages to index.
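Because crawlers only look for the file at the root of the host, its URL is fixed once you know the domain. A short sketch deriving that location from any page URL (the page URL is a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url):
    # Crawlers fetch robots.txt only from the host root,
    # so the path is always replaced with /robots.txt.
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://example.com/blog/post1.html"))
# https://example.com/robots.txt
```

A robots.txt placed in a subdirectory (e.g. /blog/robots.txt) is simply ignored, which is why the upload must go to the top-level root.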
Check out other tools: XML Sitemap Generator