Will the 500,000 limit on database table rows limit the amount of data I can store in Urchin?
The short answer is no: you should be able to load large amounts of historical data into Urchin for analysis. Some users process 25 GB of data on a daily basis. The Urchin documentation states that there is a maximum table size of 500,000 records, but this does not mean you are limited to 500,000 visits, because a record in a database table does not correspond to a single hit on the website.
By default, the maximum database size is set to 10,000 records. This limit can be raised to 60,000 on the Process Settings screen in the Urchin console. Increasing the limit beyond 60,000 requires the "uconf-driver" utility.
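As an illustration only, an invocation along the following lines is used to raise a single profile's limit with uconf-driver. The flag names, table name (`r_profile`), parameter name (`ct_maxdbsize`), and install path shown here are assumptions recalled from typical Urchin installations, not taken from this article; verify them against your version's uconf-driver documentation before running anything.

```shell
# Illustrative sketch only -- flag names, table name, parameter name,
# and install path are assumptions; check your Urchin documentation.
cd /usr/local/urchin/util            # assumed Urchin install location
./uconf-driver -A update_parameter \
    -Tr_profile \
    -Ct_name="myprofile" \
    -Fct_maxdbsize=100000            # new per-table record limit
```

Because the exact syntax varies between Urchin releases, treat this as a starting point and confirm the parameter name in your installation before applying it.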
If you use uconf-driver to set a higher database limit, note that there is a hard-coded maximum of 500,000 records. Use caution when increasing the database size beyond 60,000 records, as doing so can affect disk space, log processing speed, and report delivery performance. It is strongly recommended that you increase the limit by no more than 25,000 records at a time, so that you can find a compromise that gives you the increased database capacity you need while maintaining an acceptable level of disk space usage and performance.
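The staged-increase recommendation above (grow by at most 25,000 records per adjustment, never past the hard-coded 500,000 ceiling) amounts to simple arithmetic. The sketch below is purely illustrative and not part of Urchin; the function name and constants are ours:

```python
HARD_LIMIT = 500_000   # hard-coded maximum records per table (per the docs)
STEP = 25_000          # recommended maximum increase per adjustment

def growth_schedule(current, target, step=STEP, hard_limit=HARD_LIMIT):
    """Return the sequence of limits to try, raising by at most
    `step` records each time and never exceeding `hard_limit`.
    Re-check disk usage and report performance after each step."""
    target = min(target, hard_limit)
    schedule = []
    while current < target:
        current = min(current + step, target)
        schedule.append(current)
    return schedule

# Example: growing from the console maximum (60,000) to 150,000
# would take four adjustments: 85,000; 110,000; 135,000; 150,000.
print(growth_schedule(60_000, 150_000))
```

After each step in the schedule, reprocess some logs and check disk space and report delivery times before moving to the next increment.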
Technical note: Urchin’s databases are based on hash table technology. Performance of these databases is good up to around 60,000 to 80,000 records, but increasing the database size beyond this can result in diminished performance. That is why the maximum database size can only be set to 60,000 records in the web-based admin interface.
This information is from the Urchin help documentation and is available here: http://help.urchin.com/index.cgi?&id=1378