All of the downloads, form posts, and crawling that you can do with WSP+ 4.0 are performed by an army of HTTP clients called user agents. To a web server these user agents look just like a normal browser (you can even make the web server believe you are Internet Explorer, Netscape, or even the Google web crawler, but more on that later). In fact, the agents are not web browsers at all. If you tried to download 5+ pages per second for any length of time with your normal web browser, it would crash fairly quickly. Browsers just aren't built for the rigors of industrial-grade automation. WSP+ 4.0 is built on a server-grade HTTP stack called WinHttp. For more information on WinHttp visit:
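To make the idea concrete, here is a minimal sketch of what happens underneath (this is illustrative C++ against the raw WinHttp API, not WSP+ code; the agent string and the example.com host are placeholders). The identity a server sees is just the string handed to WinHttpOpen, which is why an agent can pose as Internet Explorer or any other client:

    #include <windows.h>
    #include <winhttp.h>
    #pragma comment(lib, "winhttp.lib")

    int main() {
        // The server identifies the client solely by this string, so the
        // same code can present itself as Internet Explorer, Netscape,
        // or a crawler just by changing it. (Spoofed IE 6 string below.)
        HINTERNET hSession = WinHttpOpen(
            L"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
            WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
            WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
        if (!hSession) return 1;

        // example.com is a placeholder host.
        HINTERNET hConnect = WinHttpConnect(hSession, L"example.com",
                                            INTERNET_DEFAULT_HTTP_PORT, 0);
        HINTERNET hRequest = hConnect
            ? WinHttpOpenRequest(hConnect, L"GET", L"/", NULL,
                                 WINHTTP_NO_REFERER,
                                 WINHTTP_DEFAULT_ACCEPT_TYPES, 0)
            : NULL;

        BOOL ok = hRequest
            && WinHttpSendRequest(hRequest, WINHTTP_NO_ADDITIONAL_HEADERS,
                                  0, WINHTTP_NO_REQUEST_DATA, 0, 0, 0)
            && WinHttpReceiveResponse(hRequest, NULL);

        if (hRequest) WinHttpCloseHandle(hRequest);
        if (hConnect) WinHttpCloseHandle(hConnect);
        WinHttpCloseHandle(hSession);
        return ok ? 0 : 1;
    }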
The WSP+ 4.0 user agents handle sessions, cookies, and redirects just like a normal web browser. Furthermore, the sessions and cookies obtained in one task are passed along to the next task in a package, so once you log in to a site in one task you remain logged in until the package completes or you log off. However, because the agents are not based on your web browser, they do not share the persistent cookies you may accumulate while casually browsing the web. So every time you run a package, it is as if the agents are visiting the site for the first time. From a scalability perspective that is a good thing: it means, for example, that you can log in to a web site as several different users without worrying that the agent is seeing a cached page or confusing one identity with another. In essence, what the system gives up in convenience it gains in scalability and predictability.
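A sketch of that behavior at the WinHttp layer (again illustrative, not WSP+ code; the /login and /account paths and the form fields are hypothetical): cookies set by one response are replayed automatically on every later request made through the same session handle, and they vanish when the handle is closed, so each run starts with a clean slate:

    #include <windows.h>
    #include <winhttp.h>
    #include <string.h>
    #pragma comment(lib, "winhttp.lib")

    // Send one request and receive the response headers; any Set-Cookie
    // headers in the reply are recorded against the owning session.
    static BOOL Send(HINTERNET hConnect, LPCWSTR verb, LPCWSTR path,
                     const char* body) {
        HINTERNET hReq = WinHttpOpenRequest(hConnect, verb, path, NULL,
                                            WINHTTP_NO_REFERER,
                                            WINHTTP_DEFAULT_ACCEPT_TYPES, 0);
        if (!hReq) return FALSE;
        DWORD len = body ? (DWORD)strlen(body) : 0;
        BOOL ok = WinHttpSendRequest(hReq,
                body ? L"Content-Type: application/x-www-form-urlencoded"
                     : WINHTTP_NO_ADDITIONAL_HEADERS,
                body ? (DWORD)-1L : 0,
                (LPVOID)body, len, len, 0)
            && WinHttpReceiveResponse(hReq, NULL);
        WinHttpCloseHandle(hReq);
        return ok;
    }

    int main() {
        HINTERNET hSession = WinHttpOpen(L"WSP-Agent/4.0",
            WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
            WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
        HINTERNET hConnect = hSession
            ? WinHttpConnect(hSession, L"example.com",
                             INTERNET_DEFAULT_HTTP_PORT, 0)
            : NULL;

        // Task 1: log in; the server's session cookie lands on hSession.
        BOOL ok = hConnect
            && Send(hConnect, L"POST", L"/login", "user=demo&pass=demo")
            // Task 2: the login cookie is attached automatically.
            && Send(hConnect, L"GET", L"/account", NULL);

        // Closing the session discards every cookie, so the next run
        // arrives at the site as a first-time visitor.
        if (hConnect) WinHttpCloseHandle(hConnect);
        if (hSession) WinHttpCloseHandle(hSession);
        return ok ? 0 : 1;
    }

Keeping nothing between sessions is what lets many identities run side by side: each package gets its own handle, and therefore its own isolated cookie jar.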