This section discusses both Mode 1 and Mode 2 configuration in Microsoft Cluster Server. Sites using Microsoft Failover Cluster Manager, Veritas Cluster Server or Novell Cluster Services should jump to the following sections.
This section assumes that you have an already installed and working clustered printing environment.
The PaperCut Print Provider is the component that integrates with the print spooler service and provides information about print events to the PaperCut Application Server. At a minimum, in a cluster environment, the Print Provider component needs to be included and managed within the cluster group. The Application Server component (the Standard Install option in the installer) is set up on an external server outside the cluster. Each node in the cluster is configured to report back to the single application server using XML web services over TCP/IP.
Install the Application Server component (Standard Install option) on your nominated system. This system will be responsible for providing PaperCut NG's web-based interface and storing data. In most cases this system will not host any printers and is dedicated to the role of hosting the PaperCut Application Server. It may be one of the nodes in the cluster; however, a separate system outside the cluster is generally recommended. An existing domain controller, member server or file server will suffice.
The Print Provider component needs to be separately installed on each node involved in the print spooler cluster. This is done by selecting the Secondary Print Server option in the installer. Follow the secondary server setup notes as detailed in Chapter 14, Configuring Secondary Print Servers and Locally Attached Printers. Take care to define the correct name or IP address of the nominated application server set up in step 1.
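For reference, the application server address ends up in the Print Provider's configuration file on each node; a minimal fragment might look like the following (the address shown is illustrative):

```
# [app-path]\providers\print\win\print-provider.conf
# Point this node's Print Provider at the nominated application server:
ApplicationServer=192.168.1.10
```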
By default the Print Provider component is installed under the management of the node. To hand over management to the cluster, the service's start-up type needs to be set to manual. On each node open the Services management console (Control Panel → Administrative Tools → Services), locate the PaperCut Print Provider service, stop the service, and set its start-up type to Manual. Repeat for each node in the cluster.
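If you prefer to script this step, the same change can be made from an elevated command prompt on each node using the standard Windows sc utility (the service name shown is the one created by the installer):

```
rem Stop the Print Provider service and switch it to manual (demand) start-up.
rem Note: sc requires a space after "start=".
sc stop PCPrintProvider
sc config PCPrintProvider start= demand
```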
Open the Cluster Administrator. Right-click on the cluster group hosting the spooler service and select New → Resource. In the new resource wizard, enter a name of PaperCut Print Provider and select a resource type of Generic Service. Click Next.
Click Next at the Possible Owners page. Ensure that the Print Spooler Service resource is set as a required dependency, then click Next.
On the Generic Service Parameters page, enter a service name of PCPrintProvider and ensure the Use Network Name for computer name option is checked. Click Next. Click Next at the Registry Replication page.
To ensure the state of currently active jobs (e.g. jobs held in a hold/release queue) is not lost during a failover event, PaperCut NG is able to save job state in a shared drive/directory. If a shared disk resource is available and can be added to the cluster resource, PaperCut NG can use it to host a shared spool directory, ensuring no active job state is lost.
Add a shared drive to the cluster resource, e.g. the Q: drive. It is advisable to use the same drive as used for the shared print spool directory.
Create a directory on this drive called PaperCut NG\Spool. Create a sub-directory under PaperCut NG\Spool called activejobs.
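From an administrative command prompt on the node that currently owns the shared drive, both directories can be created in one step (drive letter as per your environment):

```
rem Creates the spool directory and its activejobs sub-directory on the shared drive
mkdir "Q:\PaperCut NG\Spool\activejobs"
```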
On each node, edit the file:
[app-path]/providers/print/win/print-provider.conf
and add a line pointing to the shared active job spool directory:
ActiveJobsSpoolDir=Q:\PaperCut NG\Spool\activejobs\
Change the drive letter as appropriate.
If running an "Active-Active" print cluster you must use a separate active job spool directory for each node. To configure this, use the %service-name% value in the ActiveJobsSpoolDir setting. The %service-name% value is replaced by the service name of the running PaperCut Print Provider instance. For example:
ActiveJobsSpoolDir=Q:\PaperCut NG\Spool\%service-name%\activejobs\
Perform test operations to verify that:
Print jobs log as expected.
No error messages appear in the Print Provider's text log, located on each node at:
C:\Program Files\PaperCut NG\providers\print\win\print-provider.log
On large networks it is common to distribute load by hosting print spooler services under two or more virtual servers. For example, two virtual servers may each host half of the organization's printers and hence share the load. This is sometimes referred to as Active/Active clustering, albeit not an entirely correct term, as the print spooler is still running Active/Passive.
Virtual servers cannot share the same service on any given node. For this reason, if the virtual servers share nodes, you'll need to manually install the PaperCut Print Provider service a second time under a different name. This can be done via the command line as follows:
cd "C:\Program Files\PaperCut NG\providers\print\win"
pc-print.exe PCPrintProvider2 /install
The argument preceding /install is the unique name to assign to the service. The recommended procedure is to suffix the standard service name with a sequential number.
Mode 2 implements failover clustering at all of PaperCut NG's Service Oriented Architecture software layers, including:
Clustering at the Print monitoring layer
Clustering at the Application Server layer
Optional clustering at the database layer
Mode 2 builds upon Mode 1 by introducing failover (Active/Passive) clustering in the Application Server layer. This involves having an instance of the application server on each of the cluster nodes. When one node fails, the other automatically takes over the operation. Both instances use a shared data source in the form of an external database (see Chapter 19, Deployment on an External Database (RDBMS)). Large sites should consider using a clustered database such as Microsoft SQL Server.
This section assumes that you have an already installed and working clustered printing environment.
On one of the cluster's nodes, install the PaperCut Application Server component by selecting the Standard Install option in the installer. Follow the setup wizard and complete the process of importing all users into the system.
The system needs to be configured to use an external database as this database will be
shared between both instances of the application server. Convert the system over
to the required external database by following the procedure detailed in Chapter 19, Deployment on an External Database (RDBMS).
The database may be hosted on another system, or inside a cluster. As per the
external database setup notes, reference the database server by IP address by
entering the appropriate connection string in the server.properties
file.
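As a sketch only, the connection settings in server.properties for a Microsoft SQL Server back-end might look like the following; the host address, database name and credentials below are placeholders, and the exact keys for your database type are covered in Chapter 19:

```
database.type=SQLServer
database.url=jdbc:sqlserver://192.168.1.20:1433;databaseName=papercut
database.username=papercut_user
database.password=secret
```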
By default the PaperCut Application Server component is installed under the management of the node. It needs to be managed inside the cluster, so the service's start-up type should be set to manual. On each node open the Services management console (Control Panel → Administrative Tools → Services), locate the PaperCut Application Server service, stop it, and set its start-up type to Manual. Repeat this on both nodes.
The PaperCut Application Server should be designated to run inside its own cluster group. Create a new cluster group containing the two nodes. Add an IP Address resource and a Network Name resource. Give the network name resource an appropriate title such as PCAppSrv.
The need for a new cluster group is not hard and fast. It is however recommended as it gives the most flexibility in terms of load balancing and minimizes the potential for conflicts.
Open the Cluster Administrator. Right-click on the new cluster group created in the previous step and select New → Resource. In the new resource wizard, enter a name of PaperCut Application Server and select a resource type of Generic Service. Click Next.
Click Next at the Possible Owners page. Click Next at the Dependencies page.
On the Generic Service Parameters page, enter a service name of PCAppServer and ensure the Use Network Name for computer name option is checked. Click Next. Click Next at the Registry Replication page.
Right-click on the cluster group and select Bring online. Wait until the application server has started, then verify that you can access the system by pointing a web browser to:
http://[Virtual Server Name]:9191/admin
Login, and perform some tasks such as basic user management and User/Group Synchronization to verify the system works as expected.
Interface the PaperCut Print Provider layer with the clustered spooler service by following the same setup notes as described for Mode 1, with one exception: the IP address of the application server will be the IP address assigned to the Virtual Server created in step 5.
The client and release station programs are located in the directories:
[app-path]/client/
[app-path]/release/
These directories contain configuration files that tell the client where to find the server. The IP address and the server name in the following set of files need to be updated to the Virtual Server's details (name and IP address):
[app-path]/client/win/config.properties
[app-path]/client/linux/config.properties
[app-path]/client/mac/PCClient.app/Contents/Resources/config.properties
[app-path]/release/connection.properties
Edit the files using Notepad or equivalent and repeat this for each node. Also see the section called “Client/Workstation Configuration”.
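For illustration, after editing, the relevant entries in each config.properties file might read as follows, assuming a virtual server named PCAppSrv at 192.168.1.30 (both values are examples):

```
server-ip=192.168.1.30
server-name=PCAppSrv
```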
Mode 2 setup is about as complex as it gets! Take some time to verify all is working and that PaperCut NG is tracking printing on all printers and all virtual servers.
It is possible to split the two application layers (resources) into two separate Cluster Groups:
Group 1: containing only the PaperCut Application Server service.
Group 2: containing the PaperCut Print Provider and Print Spooler services. These services are dependent and hence must be hosted in the same group.
Separating these resources into two groups allows you to set up different node affinities so that the two groups usually run on separate physical nodes during normal operation. The advantage is that the load is spread further across the systems and a failure in one group will not necessarily fail over the other.
To make this change after setting up the single group Mode 2 configuration:
Change the ApplicationServer= option in [app-path]/providers/print/win/print-provider.conf on each physical node to the IP or DNS name of the virtual server.
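For example, using the network name given to the virtual server earlier (PCAppSrv here; substitute your own), the line in print-provider.conf on each physical node would become:

```
ApplicationServer=PCAppSrv
```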
Create a new group called PaperCut Application Server Group.
Set the Preferred owners of each group to different physical nodes.
Restart or bring online each group, and independently test operation, both normally and after fail-over.
Take some time to simulate node failure. Monitoring may stop for a few seconds while the passive server takes over the role. Simulating node failure is the best way to ensure both sides of the Active/Passive setup are configured correctly.
It is important that the version of PaperCut NG running on each node is identical. Ensure that any version updates are applied to all nodes so versions are kept in sync.
The PaperCut NG installation sets up a read-only share exposing client software to network users. If your organization is using the zero-install deployment method, the files in this share will be accessed each time a user logs onto the network. Your network may benefit from exposing the contents of this share via a clustered file share resource.
By default the Application Server looks in [app-path]\server\data\web-print-hot-folder for Web Print files. This location is generally only available on one node in the cluster. To support Web Print in a cluster you need to add a Shared Folder on the Shared Storage in your cluster. This can be done on the same disk where the spool files reside and that the Print Provider points to. To change this location, use the Config Editor and modify the web-print.hot-folder key.
Add a Shared Folder on the Shared Storage, for example E:\web-print-hot-folder, and share it as \\clustername\web-print-hot-folder\.
Log in to the PaperCut NG administration console, navigate to Options → Config Editor (Advanced), and modify web-print.hot-folder to E:\web-print-hot-folder.
Map your selected network drive on the Web Print Sandbox machine to \\clustername\web-print-hot-folder\
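The mapping can be made persistent across reboots with the net use command; a sketch assuming drive letter W: (the drive letter is an example):

```
rem Map the clustered hot-folder share and re-create the mapping at logon
net use W: \\clustername\web-print-hot-folder /persistent:yes
```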
Add all relevant printer queues from \\clustername to the Web Print Sandbox server.
© Copyright 1999-2011. PaperCut Software International Pty Ltd. All rights reserved.