Fault tolerance is a site’s ability to remain accessible to users during technical problems and load spikes. To make a web resource as resilient as possible, estimate the potential load in advance and provision sufficient server capacity.
The role of resiliency in the operation and promotion of a site
High resiliency is one of the quality indicators of an Internet resource. Protecting a site from failures completely is impossible; minimizing the number of errors and ensuring uninterrupted access, however, is quite achievable.
Fault tolerance can be measured as a percentage. A value of 100% means the resource was continuously available for the whole month; a lower value indicates outages. The total time a site is unavailable should normally not exceed two hours per month, and the resiliency indicator should be no less than 99%.
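The relationship between downtime and the availability percentage can be sketched with a short calculation (the 30-day month and the helper name are assumptions for illustration):

```python
def availability_percent(downtime_hours: float, period_hours: float = 30 * 24) -> float:
    """Share of the period (default: a 30-day month) during which the site was reachable."""
    return round(100 * (1 - downtime_hours / period_hours), 2)

# Two hours of downtime in a 30-day month still leaves availability above 99%:
print(availability_percent(2))    # 99.72
# The 99% threshold itself corresponds to about 7.2 hours of downtime per month:
print(availability_percent(7.2))  # 99.0
```

This shows why the two figures in the text are consistent: staying under two hours of downtime comfortably satisfies the 99% target.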
User access to site data
The main purpose of attracting traffic to a site is to inform users about your company or brand and to encourage the useful actions visitors can take on the site.
When a potential client visits the site for the first time, making a good impression is especially important. Errors on the screen and restricted access to page content cause the user to close the page and, possibly, never return to the site.
SEO performance
If the server cannot cope with the load, access to the site’s data becomes limited. Errors caused by server-side problems form the 5xx group of HTTP response status codes, so users see errors in the 500–599 range on their screens.
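A minimal sketch of how such codes are classified (the status-code ranges come from the HTTP specification; the helper name is illustrative):

```python
def is_server_error(status_code: int) -> bool:
    """True for the 5xx range, i.e. errors caused by server-side problems."""
    return 500 <= status_code <= 599

print(is_server_error(503))  # True  (Service Unavailable: typical of an overloaded server)
print(is_server_error(404))  # False (a 4xx code is a client-side error, not a server failure)
```

A monitoring script built on this check would count only 5xx responses, since those are the ones that signal server overload rather than a user's mistake.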
Occasional failures are, as a rule, a normal part of operation. However, a systematic accumulation of 5xx errors often reduces the authority of a web resource in search engines.
Site indexing and the crawler’s processing of problem pages may be put at risk. To avoid losing accumulated ranking and search visibility, it is important to eliminate technical errors and provide the platform with sufficient resources.
What determines the performance of the site
Site load and server features
Any user interaction with a web resource is a collection of tasks that differ in nature and scope. Load on the site therefore translates into load on the server. If the server cannot handle the number of requests, errors appear and access to web pages becomes limited.
Most site owners have at least once received a notification from their hosting provider about site overload. To ensure high fault tolerance, provision sufficient server resources and monitor the amount of load on the site. By reducing the load, you avoid having to switch to a more expensive tariff and overpay.
To give a site high resiliency, you need to work on the server and software and replicate data across several servers: if one server fails, the site keeps running on another.
Reasons for reduced site resiliency
- Traffic growth.
If the resource’s popularity has grown, it has started ranking higher in organic search results, many new external links have appeared, or additional advertising channels have been engaged, the increase in visitors can sharply raise the server load. This often exceeds the limit defined by your chosen hosting tariff.
- Active crawling by search robots.
A large number of pages combined with active search-engine crawling can significantly increase the load. To avoid this, carefully review which pages are open for indexing and whether that is appropriate. Use the robots.txt file to restrict access to pages and documents that should be excluded from the index.
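Such restrictions might look like the following robots.txt fragment (the paths are hypothetical examples; real rules depend on your site’s structure):

```
User-agent: *
# Keep internal search results and temporary files out of the index
Disallow: /search/
Disallow: /tmp/
```

Note that robots.txt only asks well-behaved crawlers to stay away; it reduces crawler load but is not an access-control mechanism.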
- Incorrectly working scripts.
For example, a script fragment placed in the wrong location, an outdated script version, or a script that conflicts with other elements of the site.
- DDoS attacks.
DoS, or Denial of Service, is the result of a hacker attack: large streams of “junk” traffic flood the site’s server and block its operation. On today’s Internet, such cyber attacks are usually carried out from multiple IP addresses or a botnet and are called DDoS (Distributed Denial of Service).
How to estimate the load on the site
Load testing and providing adequate server resources raise fault tolerance. If the resource’s operation is still interrupted, it is only for a very short time.
When working with the fault tolerance indicator, evaluate the server’s performance: how long request processing takes and how well it meets the established criteria.
Load testing means applying an artificial load to the site and tracking how the system copes with the volume of work. One of its principles is the creation of behavioral scenarios and the use of virtual users who perform these actions simultaneously.
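The virtual-user principle can be sketched in a few lines (this is an illustrative toy, not a replacement for a real tool; the function names are assumptions, and the placeholder scenario just sleeps instead of issuing an HTTP request):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(scenario, virtual_users: int, iterations: int = 1):
    """Run `scenario` concurrently for `virtual_users` simulated visitors
    and collect per-call latencies in seconds."""
    def one_user(_):
        latencies = []
        for _ in range(iterations):
            start = time.perf_counter()
            scenario()  # in a real test: fetch a page, submit a form, etc.
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        per_user = list(pool.map(one_user, range(virtual_users)))
    all_latencies = [t for user in per_user for t in user]
    return {
        "requests": len(all_latencies),
        "avg_latency": sum(all_latencies) / len(all_latencies),
        "max_latency": max(all_latencies),
    }

# Placeholder scenario: 5 virtual users, 3 actions each, ~10 ms per action.
stats = run_load_test(lambda: time.sleep(0.01), virtual_users=5, iterations=3)
print(stats["requests"])  # 15
```

Real load-testing tools follow the same shape: many concurrent workers replay a behavioral scenario while latencies and error rates are recorded.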
There are online load testing services as well as dedicated applications for estimating the expected load on a system.
How to conduct load testing online
Free online testing services allow you to quickly audit the site. In this case, there is no need to install the application. Just go to the site and enter the URL of your resource in the input field. These services include:
- Onlinewebcheck.com
- Alertra.com
- Webpagetest.org
- Pagescoring.com
- Gtmetrix.com
- Rapid.searchmetrics.com
- Tools.pingdom.com
- Site24x7.com
- Builtwith.com
- Webtoolhub.com
JMeter load testing
One of the popular ways to determine the expected load on a system, and the server’s ability to cope with it, is load testing with Apache JMeter. With this application you can measure the performance of your web project by simulating the possible load on it: a given number of users performing specific tasks. The results can be viewed as tables, graphs, and diagrams.
How to do load testing with Apache JMeter
The tool is free. JMeter is a Java application, so a Java runtime must be present; beyond that, it is enough to download the application, run the resulting file, and start load testing the site.
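Once a test plan has been built (usually in the GUI), the load run itself is typically launched from the command line in non-GUI mode; a sketch of the invocation, with placeholder file names:

```shell
# Run an existing test plan in non-GUI mode, writing raw results to a .jtl file
jmeter -n -t my_test_plan.jmx -l results.jtl

# JMeter 3.0+ can additionally generate an HTML dashboard report after the run
jmeter -n -t my_test_plan.jmx -l results.jtl -e -o report/
```

Non-GUI mode is recommended for the actual load run because the GUI itself consumes resources and would distort the measurements.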
To obtain the most accurate results, run the testing in several stages and average the outcomes.
Examples of other tools for site load testing:
- The Grinder;
- WebLOAD RadViews;
- Gatling;
- CloudTest by Soasta.
Conclusion
Fault tolerance is directly related to the load on the site and the server’s ability to cope with the volume of tasks.
Load testing the site and ensuring appropriate server capacity are the basis for the resource’s uninterrupted operation.
The absence of systematic server-side errors is an important factor in the indexing and promotion of a web project.