
StreamCatcher: F.A.Q.

I am running many active sites. How do I know that installing StreamCatcher as an overall filter won't break my sites?

Unless you override the default setting, StreamCatcher runs only on localhost when first installed. Once you register it, you can set up your filtering rules and test them thoroughly through a remote interface, so that you can be entirely confident in the processing before you activate filtering for your site users. StreamCatcher's testing interface accepts data copied from a spreadsheet, in which you plan out your tests (input, expected result, ...). A sample test file is included, and the testing process is described in the owner's manual.
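
For illustration, a spreadsheet test plan of this kind might look like the two-column, tab-separated layout below. The exact columns StreamCatcher expects are defined by the included sample file and the owner's manual; this sketch is an assumption, not the official format:

    input                       expected result
    /pub/relnotes/afile.txt     pass through unchanged
    /demo/page1                 treat as dynamic request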

How is StreamCatcher different from the Coolness Layer, also published by HREF?

StreamCatcher's configuration may be modified through a web browser, whereas Coolness uses a Windows configuration utility that runs on the server. StreamCatcher provides some traffic analysis (of user agents and robot requests); Coolness handles only URL remapping. StreamCatcher provides separate rules for human requests versus robot requests; Coolness treats all requests the same way.

For WebHub users, StreamCatcher requires you to itemize the active AppIDs, and it then treats every request that starts with a valid AppID as a dynamic request. It therefore does not require you to itemize all directories with fixed resources (gifs, jpgs, etc.), only those that you wish to be browsable. Coolness requires itemizing only the default WebAppID for the domain group, plus all directories with fixed resources.

This distinction is subtle, but very important for anyone converting from Coolness to StreamCatcher. For example, given a request for http://href.com/pub/relnotes/afile.txt, where pub was not declared as anything to either filter, StreamCatcher would pass the request through unchanged, and Coolness would translate it to http://href.com/scripts/runisa.dll?pub:relnotes::afile.txt. This is neither good nor bad; it is simply a different approach to setting up the default processing rules.
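
To make the difference concrete, here is a minimal Python sketch of the two default rules. The AppID and directory names and the translate() helper are assumptions made for illustration; only the pub/relnotes example above comes from the actual products:

    # Sketch of the two default-processing philosophies described above.
    STREAMCATCHER_APPIDS = {"demo", "shop"}      # itemized active AppIDs (hypothetical)
    COOLNESS_FIXED_DIRS  = {"images", "styles"}  # itemized fixed-resource dirs (hypothetical)

    def translate(path):
        # Follows the pattern from the example above:
        # /pub/relnotes/afile.txt -> /scripts/runisa.dll?pub:relnotes::afile.txt
        appid, *rest = path.strip("/").split("/")
        return f"/scripts/runisa.dll?{appid}:{'/'.join(rest[:-1])}::{rest[-1]}"

    def streamcatcher_default(path):
        # Dynamic only when the first segment is a declared AppID;
        # anything else passes through unchanged.
        first = path.strip("/").split("/")[0]
        return translate(path) if first in STREAMCATCHER_APPIDS else path

    def coolness_default(path):
        # Fixed only when the first segment is an itemized fixed-resource
        # directory; anything else is translated to a dynamic request.
        first = path.strip("/").split("/")[0]
        return path if first in COOLNESS_FIXED_DIRS else translate(path)

    print(streamcatcher_default("/pub/relnotes/afile.txt"))
    # -> /pub/relnotes/afile.txt (unchanged)
    print(coolness_default("/pub/relnotes/afile.txt"))
    # -> /scripts/runisa.dll?pub:relnotes::afile.txt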

What is a good starter list of web robots for the master configuration file?

Please see: webrobotlist.txt
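
As a rough illustration of how such a list might be applied, the sketch below classifies a request as robot or human by checking the User-Agent string against substrings loaded from the file. The one-substring-per-line format is an assumption for this sketch; consult webrobotlist.txt itself and the owner's manual for the real format:

    # Assumes webrobotlist.txt holds one case-insensitive substring per line.
    def load_robot_list(filename="webrobotlist.txt"):
        with open(filename) as f:
            return [line.strip().lower() for line in f if line.strip()]

    def is_robot(user_agent, robot_substrings):
        # A request is treated as a robot if any listed substring
        # appears in its User-Agent header.
        ua = user_agent.lower()
        return any(s in ua for s in robot_substrings)

    # robots = load_robot_list()
    # is_robot("Googlebot/2.1 (+http://www.google.com/bot.html)", robots)
    # -> True, provided the list contains e.g. "googlebot"
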
FAQ updated 28-February-2008.