Though we are constantly working on optimizing all Nemesida WAF components, there is a possibility of an emergency situation. Such situations include:

  • Emergency shutdown of the filtering node;
  • Slowed access to web applications due to increased consumption of hardware resources on the filtering node server;
  • Inability of clients to access the web application due to an increase in the number of blocked legitimate requests.

Let’s look at each of these scenarios.

🔗 Emergency shutdown of the filtering node

During Nginx operation, errors may occur that abort request processing, preventing Nemesida WAF from handling the request correctly. To track down and fix such errors in the code, analysis of the recorded process memory image (core dump) is used.

If such a situation occurs, you must perform the following actions:

  • Activate monitoring of Nginx process crashes when working with Nemesida WAF;

    Activation instructions
    1. Create a directory to store core dump files:
    # mkdir /var/log/nginx/core_dumps
    # chown root:root /var/log/nginx/core_dumps
    # chmod 1777 /var/log/nginx/core_dumps
    
    2. Remove the limit on the maximum size of the process memory image file:
    # ulimit -c unlimited
    If the command ends with the message «Cannot modify limit: operation not allowed», run the command:
    # sh -c "ulimit -c unlimited && exec su $LOGNAME"
    3. Enable core dump recording at the system level by adding the following to /etc/sysctl.conf:
    kernel.core_pattern = /var/log/nginx/core_dumps/core.%e.%p
    fs.suid_dumpable = 2
    
    Then apply the changes:
    # sysctl -p
    
    4. Enable core dump recording in Nginx:
    4.1 Add the following at the beginning of the Nginx configuration file /etc/nginx/nginx.conf:
    worker_rlimit_core      2G;
    working_directory       /var/log/nginx/core_dumps/;
    
    4.2 Save the changes and restart Nginx:
    # systemctl restart nginx
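    After the restart, you can check that the settings are in effect (a sketch; the pgrep lookup of a worker process PID assumes the procps utilities are installed):
    # ulimit -c
    # sysctl kernel.core_pattern
    # grep "core file size" /proc/$(pgrep -f "nginx: worker" | head -n1)/limits
    The first command should return "unlimited", the second the pattern set in /etc/sysctl.conf, and the third a limit matching worker_rlimit_core (2G).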

    Additional features

    To additionally write to a file the requests held in worker process memory at the time of the error that caused the core dump, add the nwaf_coredump_request_path parameter to the Nemesida WAF dynamic module configuration file /etc/nginx/nwaf/conf/global/nwaf.conf:
    nwaf_coredump_request_path /var/log/nginx/coredump_requests;
    When this parameter is used, each Nginx worker process is allocated 25 MB of memory for intermediate storage of requests. Once 75% of this memory is filled, each request is limited to 64 KB and the request body is truncated.
    Recording core dumps and the requests held in worker process memory significantly affects Nginx performance and produces large files, so it is recommended to disable these features once the necessary information has been collected.
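    One possible way to revert these temporary diagnostic settings after the data has been collected is to comment out the added lines (a sketch based on the values used above). In /etc/nginx/nginx.conf:
    #worker_rlimit_core      2G;
    #working_directory       /var/log/nginx/core_dumps/;
    In /etc/nginx/nwaf/conf/global/nwaf.conf:
    #nwaf_coredump_request_path /var/log/nginx/coredump_requests;
    Then check the configuration and restart Nginx:
    # nginx -t
    # systemctl restart nginx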

    Analysis of the received data

    To examine a collected core dump, use the GDB debugger by running gdb path/to/nginx -c path/to/core_dump:
    # gdb /usr/sbin/nginx -c /var/log/nginx/core_dumps/core.XX.YY
    ...
    [New LWP XXXX1]
    [New LWP XXXX2]
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    Core was generated by `nginx: worker process                   '.
    Program terminated with signal SIGSEGV, Segmentation fault.
    #0  0x00005600c259077a in ngx_radix32tree_find ()
    [Current thread is 1 (Thread 0x7f6ca08e9740 (LWP XXXXX))]
    (gdb)
    
    Run the bt command (backtrace: it prints the function call stack leading up to the error):
    (gdb) bt
    #0  0x00005600c259077a in ngx_radix32tree_find ()
    #1  0x00005600c25ffd84 in ?? ()
    #2  0x00005600c25ce948 in ngx_http_get_indexed_variable ()
    #3  0x00005600c25cf7e6 in ngx_http_script_copy_var_len_code ()
    #4  0x00005600c25cfa9a in ngx_http_complex_value ()
    #5  0x00005600c2603428 in ?? ()
    #6  0x00005600c25ce948 in ngx_http_get_indexed_variable ()
    #7  0x00005600c25cea47 in ngx_http_get_flushed_variable ()
    #8  0x00005600c25d1783 in ngx_http_script_var_code ()
    #9  0x00005600c2604956 in ?? ()
    #10 0x00005600c25be8ec in ngx_http_core_rewrite_phase ()
    #11 0x00005600c25ba0be in ngx_http_core_run_phases ()
    #12 0x00005600c25ba163 in ngx_http_handler ()
    #13 0x00005600c25c4d08 in ngx_http_process_request ()
    #14 0x00005600c25c5284 in ?? ()
    #15 0x00005600c25c55c4 in ?? ()
    #16 0x00005600c25c576f in ?? ()
    #17 0x00005600c25ac12d in ?? ()
    #18 0x00005600c25a2996 in ngx_process_events_and_timers ()
    #19 0x00005600c25aa5d9 in ?? ()
    #20 0x00005600c25a8cd2 in ngx_spawn_process ()
    #21 0x00005600c25a9874 in ?? ()
    #22 0x00005600c25ab100 in ngx_master_process_cycle ()
    #23 0x00005600c2583ae2 in main ()
    (gdb)
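    If more detail is needed, the following standard GDB commands can provide additional context; if the backtrace contains many "??" entries, installing the Nginx debug symbols package available for your distribution makes the output more informative:
    (gdb) bt full
    (gdb) info threads
    (gdb) thread apply all bt
    The output of these commands can be attached to the information sent to technical support.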
    
  • Send the collected information to technical support for analysis and troubleshooting.

In a critical situation, when failures in the filtering node interfere with the operation of the protected web applications, it is acceptable to exclude the filtering node from request processing while troubleshooting.

🔗 Slowed access to web applications due to increased consumption of hardware resources of the filtering node server

During operation, the filtering node processes a large number of requests, analyzing them with signature analysis and the machine learning module. These operations require a certain amount of server hardware resources. If hardware resources are insufficient, access to the protected web applications may slow down because the filtering node, which processes and proxies traffic, becomes overloaded. If such a situation occurs, perform the following actions (a diagnostic sketch follows the list):

  • Review the recommended hardware requirements for the server running the corresponding component and, if possible, add resources to meet them;
  • Send the Nginx web server log to technical support for analysis and troubleshooting (if the developers confirm a malfunction leading to increased consumption of server hardware resources) or to receive recommendations for reducing hardware resource consumption.
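As an aid to this analysis, a snapshot of the filtering node's resource consumption can be captured with standard Linux tools before contacting technical support (a minimal sketch; adjust the commands to the utilities available on your system):
    # top -bn1 | head -n 20
    # ps -o pid,%cpu,%mem,rss,cmd -C nginx
    # free -m
    # df -h /var/log/nginx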

In some cases, access to the protected web application may slow down when the filtering node processes requests with a large amount of content in the request body (for example, when transferring files). Processing such requests requires additional hardware resources and time, which can slow down both the filtering node and all web applications whose traffic is proxied through it. If the web application's functionality involves file transfers, a possible solution is to apply settings in Nemesida WAF Cabinet that exclude checking of the request body contents.

In some cases, the filtering node may also consume an increased amount of hardware resources (in particular, server RAM) when processing a large number of requests that are passed for additional analysis to the machine learning module. This is because signature analysis additionally performs Base64 decoding of the ARGS, BODY, URL and HEADERS zones and then checks them for signs of attacks (signatures), after which the request is passed to the Nemesida AI MLA module. In most cases, it is enough to enable the option that disables decoding of the corresponding zone in Nemesida WAF Cabinet.

🔗 Inability of clients to access the web application due to an increase in the number of blocked legitimate requests

When analyzing requests, the filtering node performs several stages of checks, including signature analysis and machine learning analysis. If the number of blocked legitimate requests increases, the cause of this Nemesida WAF behavior must be determined.

If such a situation occurs, it is recommended to perform the following actions:

  • Switch Nemesida WAF to monitoring mode (requests are not blocked, but the corresponding events are recorded in the Nemesida WAF logs; see the sketch after this list);
  • Send the Nginx web server log to technical support for analysis and troubleshooting (if the developers confirm a malfunction leading to false positives) or to receive recommendations for reducing their number.
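While Nemesida WAF operates in monitoring mode, the recorded events can be observed to see which requests would otherwise have been blocked (a sketch that assumes the events are written to the Nginx error log at its default path; adjust it to your error_log setting):
    # tail -f /var/log/nginx/error.log | grep -i nemesida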

🔗 Disabling Nemesida WAF

In a critical situation in which it is impossible to continue using Nemesida WAF without degrading the performance of the protected web applications (for example, when long response times from the web application are unacceptable, or the filtering node proxying traffic prevents the web application from remaining constantly available to clients), the filtering node may be disabled until the cause of this Nemesida WAF behavior is clarified.

If it is necessary to disable the Nemesida WAF, it is recommended to perform the following actions:

  • Comment out the load_module /etc/nginx/modules/ngx_http_waf_module.so; and include /etc/nginx/nwaf/conf/global/*.conf; directives in the Nginx configuration (see the sketch after this list);
  • Configure an additional server operating in traffic mirroring mode to retain up-to-date information about attacks on the protected web applications without affecting their performance;
  • Send the Nginx web server log to technical support for analysis and troubleshooting (if the developers confirm a malfunction) or to receive recommendations for further actions.
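A sketch of the first step, based on the directives listed above: comment out the module lines in /etc/nginx/nginx.conf, then check the configuration and restart Nginx:
    #load_module /etc/nginx/modules/ngx_http_waf_module.so;
    #include /etc/nginx/nwaf/conf/global/*.conf;

    # nginx -t
    # systemctl restart nginx

For the traffic mirroring option, one possible approach is the standard Nginx ngx_http_mirror_module; the upstream name and the mirror server address below are placeholders for illustration:
    location / {
        mirror /waf-mirror;                      # duplicate each request to the internal mirror location
        proxy_pass http://backend;               # main (production) upstream, placeholder name
    }

    location = /waf-mirror {
        internal;
        proxy_pass http://10.0.0.2$request_uri;  # additional server with the filtering node (example address)
    }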