
Design of an Efficient Migrating Crawler based on Sitemaps

Abstract

English

As the size of the web keeps growing, covering as much of it as possible has become an increasingly cumbersome job for a web crawler. An enormous number of web pages is available on the web, yet only a smaller fraction of them reaches users. A sitemap is an XML file that lists the URLs of a particular site. The Sitemap protocol can therefore play an important role in maximizing web coverage. In this paper, the information provided by the Sitemap protocol is used to crawl quality web pages. With the help of sitemaps, web crawlers keep their repository up to date. The approach also aims to let users access as many pages as possible by maximizing coverage during crawling. In addition, it helps the crawler visit pages according to their change frequency and download only the updated pages, thereby reducing unnecessary network traffic.
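The following Python sketch is only an illustration of the idea described in the abstract, not the crawler design proposed in the paper: it parses a site's sitemap and selects for re-crawling only the URLs whose <lastmod> value is newer than the previous crawl, so unchanged pages are not downloaded again. The sitemap URL, the helper name urls_to_recrawl, and the fallback policy for entries without a <lastmod> hint are assumptions made for the example.

# Illustrative sketch (assumed, not the paper's implementation): parse a
# sitemap and keep only URLs changed since the last crawl.
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_to_recrawl(sitemap_url, last_crawl_time):
    """Return URLs whose <lastmod> is newer than last_crawl_time."""
    with urllib.request.urlopen(sitemap_url) as resp:
        root = ET.fromstring(resp.read())
    stale = []
    for entry in root.iter(SITEMAP_NS + "url"):
        loc = entry.findtext(SITEMAP_NS + "loc")
        lastmod = entry.findtext(SITEMAP_NS + "lastmod")
        if loc is None:
            continue
        if lastmod is None:
            stale.append(loc)            # no hint: re-crawl to be safe (assumed policy)
            continue
        try:
            modified = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
        except ValueError:
            stale.append(loc)            # unparseable date: re-crawl to be safe
            continue
        if modified.tzinfo is None:      # date-only entries: assume UTC
            modified = modified.replace(tzinfo=timezone.utc)
        if modified > last_crawl_time:
            stale.append(loc)            # page changed since the last visit
    return stale

# Example usage with a hypothetical site:
# pending = urls_to_recrawl("https://example.com/sitemap.xml",
#                           datetime(2023, 1, 1, tzinfo=timezone.utc))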

Table of Contents

Abstract
 1. Introduction
 2. Related Work
 3. Design of an Efficient Migrating Crawler based on Sitemaps
  3.1 Architecture of Proposed Work
 4. Performance Analysis
 5. Efficiency of Sitemap
  5.1 Web Coverage
  5.2 Preserves Bandwidth
  5.3 Co-operation between Migrating Agents
 6. Conclusion
 References

Author Information

  • Deepika, Computer Engineering Department, YMCAUST, Faridabad, India
  • Dr. Ashutosh Dixit, Computer Engineering Department, YMCAUST, Faridabad, India
