
Technologies Are Made To Serve Human Beings




We have a great deal of experience and expertise in managing operating systems, network infrastructure, application servers, and databases.

We have been hosting .NET, WordPress, and Joomla applications since 2011. During that time, we have had to adapt to changing technology and Customer needs. In doing so, we have blazed our own hosting trail.

Hosting is complex and involves many components. The ones we use are described below. When choosing hosting services for your system, it is well worth your while to study them thoroughly and consider:

  • Sharing — should the components be exclusively yours, or can they be shared?
  • Scale — how many resources do I need (e.g. a disk array)?
  • Scalability — how quickly may the system requirements change?

In the case of a business-critical system with a heavy workload, the hosting environment will probably require the full spectrum of available resources. For a business support application, sharing resources may suffice. This would give you all the advanced security mechanisms and high availability without having to invest in a dedicated infrastructure.

Secure Colocation Center

Our secure colocation center is a building that houses servers and other supporting devices, providing:

  • the resources and environmental parameters necessary for operation, i.e. appropriate temperature and a continuous power supply, and
  • physical security, i.e. protection against intrusion and against natural disasters such as fire or flood.

More information on this topic can be found in the Security section.

Internet Connection

The Internet connection must ensure fast and continuous user access to the application. In order to meet these requirements, the hosting center where the system is located needs to be connected to many local, national and foreign service providers via multiple connections. Depending on your customer base, the traffic may be mostly domestic or mostly international. This aspect should be considered at the design stage, as it affects the service price.

Network Infrastructure

Network infrastructure is responsible for efficient communication between the Internet and your system, and between the system components, e.g. the application server and the database. It comprises routers and network switches. In the case of systems developed by e-point, routers are part of the hosting center infrastructure and are shared by various Customer systems. Network switches are crucial components that help organize the other segments of the infrastructure: servers connected to the same switch may be grouped into virtual LANs (VLANs), which makes it possible to control the traffic between them at the firewall level.
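As a sketch, grouping servers into VLANs on a managed switch might look like this (Cisco IOS syntax; the VLAN numbers and interface names are hypothetical):

```
! Define two VLANs, e.g. one for web servers and one for databases
vlan 10
 name WEB
vlan 20
 name DB
! Assign a server-facing port to the web VLAN
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
! Assign another port to the database VLAN
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 20
```

Traffic between the two VLANs must then pass through the firewall, where the traffic policy is enforced.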

Firewalls

Firewalls protect the system from unwanted network traffic and implement the network traffic policy, i.e. they restrict communication between servers to strictly specified protocols. Firewalls hide the network topology from the outside world, perform network address translation (NAT), and can prioritize selected traffic. They also support VPN channels when there are no dedicated VPN hubs. Finally, a firewall can assist with fine-grained filtering, e.g. restricting administrator panel access to specified networks, such as Customer headquarters.
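As an illustration, such a traffic policy could be expressed with iptables rules along these lines (the addresses, ports and interface name below are hypothetical):

```shell
# Default-deny policy for traffic passing through the firewall
iptables -P FORWARD DROP

# Allow only HTTPS from the Internet to the web server
iptables -A FORWARD -d 10.0.1.10 -p tcp --dport 443 -j ACCEPT

# Restrict the administrator panel to the Customer's headquarters network
iptables -A FORWARD -s 203.0.113.0/24 -d 10.0.1.10 -p tcp --dport 8443 -j ACCEPT

# Hide the internal topology: NAT outgoing traffic behind the firewall's public address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

With the default DROP policy in place, only the explicitly listed protocols and source networks can reach the servers behind the firewall.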

Servers

Servers must meet Customer and user expectations regarding application responsiveness and, even more importantly for Web systems, provide a high degree of concurrency (serving many users in parallel without any significant decrease in system usability). In the case of Java Enterprise systems, having enough memory is crucial, as Java Virtual Machines (JVMs) store user sessions (including copies from other servers in the cluster) in memory and allocate large chunks of RAM for every HTTP request. It is vital that garbage collection does not become the processor's main task.
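For example, an application server JVM is typically started with an explicit, fixed heap size and garbage-collection logging turned on; the values below are placeholders to be derived from load testing, and the jar name is illustrative:

```shell
# Equal -Xms and -Xmx avoids costly heap resizing under load.
# -verbose:gc logs collections so their share of CPU time can be tracked.
java -Xms4g -Xmx4g -verbose:gc -jar application-server.jar
```

Reviewing the GC log over time shows whether the heap is large enough for the session and per-request memory footprint, or whether collection is starting to dominate processor time.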

Operating Systems

The operating system serves as a bridge between the physical architecture and the network, and between the middleware and the application. The operating system receives network packets and delivers them to individual programs, supports communication with local disks and storage, and is responsible for efficient disk space management. Finally, the operating system executes program code, e.g. the application server and the database.

There are several operating systems available. They differ in how efficiently they handle networking and disk operations, and these differences determine which system is chosen for a particular task. At e-point, we prefer open operating systems: BSD derivatives, and commercial and free Linux distributions. Thanks to our experience, we can select the right system for any purpose. We do not avoid proprietary systems when they perform particular tasks well; for example, IBM AIX is an ideal platform for SAN storage and DB2 databases (it obviously helps to have a single manufacturer). e-point also uses Microsoft operating systems when projects require software that does not run on UNIX or its derivatives; even then, high levels of security and availability can be achieved through ancillary software and a strict security policy.

We prefer free Linux distributions because they are in no way inferior to their commercial counterparts and the Customer incurs no license costs. Nevertheless, a commercially supported distribution is necessary when manufacturer support for middleware is required or when a driver is only available for selected platforms. We recommend Red Hat Enterprise Linux in these situations.

Middleware

Middleware is the software necessary to run Java Enterprise systems, large and small. It usually comprises a Web server, an application server and a database.

Web Server

A Web server is the first layer that receives traffic from the Internet for an application. We use Apache software, the most popular, stable, and feature-rich open Web server. Apache integrates perfectly with any application server, and supports the following:

  • SSL encryption with a server certificate, or a client certificate if additional security is required
  • Communication with an application server via appropriate plug-ins; this relieves the application server of the usual connection handling and optimizes its resource utilization
  • Cache memory for selected publicly available resources
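A minimal sketch of such a setup in Apache configuration (the host names, paths and AJP back end below are illustrative, not a production configuration):

```
<VirtualHost *:443>
    ServerName www.example.com

    # SSL termination with a server certificate
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key

    # Hand dynamic requests to the application server via the AJP plug-in
    ProxyPass /app ajp://appserver.internal:8009/app

    # Cache selected public resources on disk
    CacheEnable disk /static
</VirtualHost>
```

Apache terminates SSL and caches static content itself, so the application server only sees the dynamic requests it actually has to process.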

Application Server

An application server, together with the application, is the central point of the hosting environment. The application deployed on it is the master component that the others serve: they handle its network traffic, protect it, and store its data. Here at e-point, we have been using both Open Source and commercial application servers, such as IBM WebSphere and Oracle Application Server, for many years.

We have been following server-side Java technology since its inception. The Apache Tomcat application server was introduced in 1999. Our first project to use the technology that later came to be called J2EE was put into production in the summer of 2000 (read here about Java Enterprise beginnings). We still use Tomcat today — not as a standalone server, but as a servlet container embedded in JBoss or the lesser known but surprisingly good (simple, fast and stable) JOnAS J2EE application server.

For our largest, business-critical projects, we use IBM WebSphere. We started using WAS with version 4 and have been upgrading to newer releases ever since. We have a lot of expertise in large cluster deployments of WebSphere (Network Deployment edition), including its operation, configuration, automation, ensuring stability, version migration, parallel versions support and monitoring.

Our deployment of WebSphere in our largest project, Amway Online, has demonstrated the versatility of Java Enterprise and shown how the Write Once, Run Anywhere principle works in practice. For some time now, a single cluster has spanned an Intel server running Linux and an IBM POWER5 server running AIX. The application has not required any changes despite the dissimilar hardware infrastructure and operating systems. We also tried an experiment, adding further members to the cluster from a node running Windows. Not even this heterogeneous environment caused any surprises: the cluster was stable, and the application ran flawlessly.

Database

The database is one of the most important components of our hosting environment. We believe that there is no better storage concept than the relational database. Yet current Java Enterprise systems often fail to exploit its full potential, an observation based on experience gained while working on projects for our Customers. We have developed our own O/R mapping mechanism between the Java Enterprise application and the database; the outcome of our experiences is the OneWebSQL library, which we decided to release as a standalone product.

We use Open Source and commercial database products, similarly to application servers. We recommend PostgreSQL, which offers great SQL standards compliance and has many of the features found in proprietary products, such as query analysis support, tuning capabilities, monitoring, physical data allocation management, and (since version 9.1) synchronous data replication.
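For instance, synchronous replication (available since PostgreSQL 9.1) is enabled on the primary with a few postgresql.conf settings; the standby name below is illustrative:

```
# postgresql.conf on the primary (PostgreSQL 9.1-era settings)
wal_level = hot_standby                  # ship enough WAL for a hot standby
max_wal_senders = 2                      # allow replication connections
synchronous_commit = on                  # COMMIT waits for standby confirmation
synchronous_standby_names = 'standby1'   # standby whose confirmation is required
```

With these settings, a transaction is only reported as committed once the standby has confirmed receipt of its WAL, so a failover does not lose acknowledged data.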

We also use commercial products such as Oracle and IBM DB2. Choosing DB2 for the Amway project, and the challenges we had to overcome as a result, made us real DB2 gurus. We can build and maintain a high-performance, highly available architecture based on DB2 and IBM pSeries servers with SAN and disk arrays, capable of processing thousands of queries per second.

It is worth mentioning that a system does not always require expensive, sophisticated products such as proprietary databases, SAN and disk arrays. Even with critical, heavily loaded systems, the tuning capabilities of modern Open Source databases, the high performance of RAID controllers and local disks in servers, and multi-level data caching mean that we can calculate Customer requirements and costs and find a solution that satisfies both project requirements and budgetary constraints.

Monitoring and Prevention

Running an application in a selected hosting environment is the beginning of the next lifecycle stage, maintenance. This stage requires close collaboration between the programming and administration teams. Details on the role of the programmers in application maintenance can be found here, but the role of network and system administrators demands further explanation.

Nothing lasts forever, so we do not ask whether a failure can occur, but when it will. We use three mechanisms to prevent the system from being unavailable:

  • A proactive approach to system failures. We frequently review system logs and resource utilization levels and trends, and continuously monitor the utilization of selected resources against Warning and Alert levels, in order to anticipate and prevent failures. This allows us to quickly:
    • Add more resources — memory, disk space, processors, connections etc.;
    • Change the cluster architecture, e.g. by vertical scaling;
    • Replace hardware in response to initial failure symptoms, perform a thorough diagnosis, or contact the manufacturer.
  • Avoiding unavailability. Should a failure occur despite our proactive measures, we make sure the system remains available. To this end, we deploy a resilient architecture, usually by introducing redundancy at all levels.
  • Failure detection. The whole hosting environment, including the application, is monitored 24/7/365.25 (even on 29 February). Whatever the time of day, our administrators will be onto it within minutes of being notified. If the error affects an application in a critical system, then, on the Customer’s request, our programmers will also be on call to start fixing it anytime.

Monitoring serves three purposes:
  • It monitors the availability of the application on the Internet, including the appropriate functioning of critical business functions, such as order submission. These data are then used to monitor the SLA (Service Level Agreement), i.e., the availability of the application stipulated in the contract and the performance levels guaranteed by e-point.
  • It monitors the availability of all hosting environment components, both essential elements (server, switch, link, router etc.) and software, e.g., Web server, application server, and database. We also monitor components outside the scope of our responsibility, if they affect system availability, such as SSL certificate expiry dates and correctness of DNS entries.
  • It monitors and collects data on the utilization of all resources, from a single switch port, firewall state, disk space, processor, I/O and memory utilization, through to connection pool utilization, process counts, JVM heap garbage collection and cache utilization. This information is necessary to regularly analyze resource utilization and proactively handle failures.
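The Warning/Alert scheme described above can be sketched as a simple threshold check; the 80% and 95% levels are hypothetical examples, not our actual thresholds:

```shell
# Classify a resource utilization percentage against Warning/Alert thresholds.
check_utilization() {
    usage=$1
    if [ "$usage" -ge 95 ]; then
        echo "ALERT"      # immediate intervention required
    elif [ "$usage" -ge 80 ]; then
        echo "WARNING"    # plan capacity before the trend reaches the alert level
    else
        echo "OK"
    fi
}

check_utilization 85    # prints "WARNING"
```

Crossing the Warning level triggers capacity planning (more disk, memory or connections); crossing the Alert level pages an administrator.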


Backup

All our systems, even the simplest ones, are covered by a backup policy. However, different systems are backed up in different ways, depending on the importance and volume of the data. One system may need a dedicated backup setup with a petabyte-level tape library; another may need nothing more than a simple disk backup.

When Customer expectations are high, we provide complex systems, including dedicated backup servers in two separate physical locations, data stored in tape libraries, and data replication between locations. All this is controlled by IBM Tivoli Storage Manager.

When the data volume is not so high and the system not so critical, we provide a disk backup service with replication to a remote server room, controlled by Open Source software such as Amanda or Bacula.

However, it should be noted that performing backups only does half the job. The proof of a successful backup service lies in successful data recovery. That is why we routinely test our backup subsystem, restoring data from the main location and (even more challenging) the auxiliary location with simulated unavailability of the main backup center. This approach gives us confidence that data can be restored from a backup copy when needed.
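The principle behind such a recovery drill can be illustrated with a minimal sketch: back data up, restore it to a separate location, and compare the two copies (the paths and file contents below are throwaway examples):

```shell
# Create sample data, back it up, restore it elsewhere, and verify the copies match.
workdir=$(mktemp -d)
mkdir -p "$workdir/src"
echo "order data" > "$workdir/src/orders.txt"

# Back up the source directory
tar -czf "$workdir/backup.tar.gz" -C "$workdir" src

# Restore into a separate directory, as a recovery drill would
mkdir -p "$workdir/restore"
tar -xzf "$workdir/backup.tar.gz" -C "$workdir/restore"

# The restore only counts as successful if the data is identical
diff -r "$workdir/src" "$workdir/restore/src" && echo "restore verified"
```

A real drill adds the complications this sketch omits: restoring from the auxiliary location, across sites, and under simulated unavailability of the main backup center.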

Integration

Modern systems are virtually always integrated with other systems. Apart from the choice of techniques and protocols, integration also has a hosting dimension, and we tailor our solutions to Customer requirements. Integration is usually performed via a VPN, although we sometimes use a dedicated connection with guaranteed bandwidth. These VPNs and connections become additional hosting environment components: they require monitoring, have to run 24/7, and need decisions on their acceptable risk of unavailability. For critical systems, we recommend maintaining a backup connection. Another popular integration method uses a public connection (without a VPN) over an SSL-encrypted channel with Customer authentication.