Discussion:
lpstat -v very slow with over 1200 printers, local printer list
Matt Garrett
2012-03-01 08:50:37 UTC
Permalink
Folks

We have been using CUPS for years and it works very well, but over the
last 6 months our CUPS server has grown from about 400 printers to 1200+,
due to mergers of groups / departments.

Is there any way to speed up lpstat ?
I am guessing the answer is no.

Or is there a way for each user to have their own printers.conf file
that only that user would use?

Basically each group only needs 4 or 5 printers.
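
For reference, lpstat can already be limited to named queues, and each
user can record a personal default with lpoptions; a minimal sketch,
where the queue names are only placeholders:

  # Ask the server about a handful of queues instead of all 1200:
  lpstat -p grp_laser1,grp_laser2 -v grp_laser1,grp_laser2

  # Store a personal default destination in ~/.cups/lpoptions:
  lpoptions -d grp_laser1

Note that this only limits what the command asks for and what the user's
default is; it does not shrink the printer list held on the server itself.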

FYI
Hardware: IBM X3550, 16GB RAM
OS: Red Hat 5.3 (Tikanga)
Clients: approx. 3000 machines running Red Hat 5.3

Matt
Michael R Sweet
2012-03-01 15:22:40 UTC
Permalink
1200 printers should not be too slow - how long does the following command take:

time lpstat -p >/dev/null
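
To separate scheduler time from network time, it may also help to compare
a run on the server itself with a run from a client that names the server
explicitly (the hostname below is a placeholder):

  # On the CUPS server itself:
  time lpstat -p >/dev/null

  # From a client, pointing at the server:
  time lpstat -h cupsserver.example.com:631 -p >/dev/null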

Sent from my iPhone
Post by Matt Garrett
Folks
We have been using CUPS for years and it works very well, but over the
last 6 months our CUPS server has grown from about 400 printers to 1200+,
due to mergers of groups / departments.
Is there any way to speed up lpstat ?
I am guessing the answer is no.
Or is there a way for each user to have their own printers.conf file
that only that user would use?
Basically each group only needs 4 or 5 printers.
FYI
Hardware: IBM X3550, 16GB RAM
OS: Red Hat 5.3 (Tikanga)
Clients: approx. 3000 machines running Red Hat 5.3
Matt
Johannes Meixner
2012-03-02 09:55:19 UTC
Permalink
Hello,
lpstat -v very slow with over 1200 printers, local printer list
...
We have been using CUPS for years and it works very well, but over the
last 6 months our CUPS server has grown from about 400 printers to 1200+
Only a guess:

Some time ago I solved a customer issue with "slow CUPS"
that was caused not by many print queues but by zillions
of job-control files for completed print jobs, which CUPS
keeps as /var/spool/cups/c-<job-number> files and reads
into main memory.

See "MaxJobs" in
http://www.cups.org/documentation.php/doc-1.5/ref-cupsd-conf.html
-----------------------------------------------------------------------
The MaxJobs directive controls the maximum number of jobs that
are kept in memory. Once the number of jobs reaches the limit,
the oldest completed job is automatically purged
...
The default setting is 500.
-----------------------------------------------------------------------
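
A quick way to check whether this is the situation here (the paths and
service name below are the Red Hat defaults and may differ):

  # Count the job-control files cupsd keeps on disk:
  ls /var/spool/cups/ | grep -c '^c'

  # If the count is huge, check cupsd.conf: "MaxJobs 0" disables purging,
  # while the default "MaxJobs 500" keeps the history bounded.
  grep -i '^MaxJobs' /etc/cups/cupsd.conf

  # After changing cupsd.conf, restart the scheduler:
  service cups restart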

I.e. in your case the number of jobs that are kept in memory has grown
from about 400 * 500 = 200000 to more than 1200 * 500 = 600000.
Perhaps in your case more than 600000 jobs kept in memory is too much?


Kind Regards
Johannes Meixner
--
SUSE LINUX Products GmbH -- Maxfeldstrasse 5 -- 90409 Nuernberg -- Germany
HRB 16746 (AG Nuernberg) GF: Jeff Hawn, Jennifer Guild, Felix Imendoerffer
Michael Sweet
2012-03-02 16:03:25 UTC
Permalink
Johannes,
Post by Johannes Meixner
...
Some time ago I solved a customer issue with "slow CUPS"
that was caused not by many print queues but by zillions
of job-control files for completed print jobs, which CUPS
keeps as /var/spool/cups/c-<job-number> files and reads
into main memory.
FWIW, the MaxJobs setting has defaulted to 500 for a long, long time now;
we haven't seen the "too much job history" problem in a while, and recent
(>= 1.4) CUPS unloads the job history from memory when it is unused, to
help minimize total memory usage. And while we *do* have a bug tracking
some performance improvements in this area (STR #2913), that would not
affect the listing of printers, which are always in memory and do not
require loading of job history data.

_________________________________________________________
Michael Sweet, Senior Printing System Engineer, PWG Chair
Johannes Meixner
2012-03-06 13:41:23 UTC
Permalink
Hello Michael,
Post by Michael Sweet
Post by Johannes Meixner
...
Some time ago I solved a customer issue with "slow CUPS"
that was caused not by many print queues but by zillions
of job-control files for completed print jobs, which CUPS
keeps as /var/spool/cups/c-<job-number> files and reads
into main memory.
FWIW, the MaxJobs setting has defaulted to 500 for a long, long time now;
we haven't seen the "too much job history" problem in a while, and recent
(>= 1.4) CUPS unloads the job history from memory when it is unused, to
help minimize total memory usage. And while we *do* have a bug tracking
some performance improvements in this area (STR #2913), that would not
affect the listing of printers, which are always in memory and do not
require loading of job history data.
I do not remember what exactly was "slow" for this customer,
but I do remember that he had at most CUPS 1.3.x with MaxJobs=0
(he had set MaxJobs=0 intentionally, but I didn't understand why)
and that whatever was slow returned to normal speed after he removed
his zillions of job-control files (there were many thousands of them).

At first glance everything looked normal on his CUPS server - except for
his zillions of old job-control files. It was only a gut feeling that made
me suggest removing them - and voila!
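
Roughly what that clean-up amounted to (stop the scheduler first so it
does not rewrite the files; note that this discards the entire job
history, including any still-queued jobs - the paths and service name
are the Red Hat defaults):

  service cups stop
  rm /var/spool/cups/c*     # job-control files (history and queued jobs)
  service cups start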


Kind Regards
Johannes Meixner
--
SUSE LINUX Products GmbH -- Maxfeldstrasse 5 -- 90409 Nuernberg -- Germany
HRB 16746 (AG Nuernberg) GF: Jeff Hawn, Jennifer Guild, Felix Imendoerffer
Johannes Meixner
2012-03-06 11:26:40 UTC
Permalink
Hello,
Post by Johannes Meixner
lpstat -v very slow with over 1200 printers, local printer list
...
We have been using CUPS for years and it works very well, but over the
last 6 months our CUPS server has grown from about 400 printers to 1200+
...
Post by Johannes Meixner
See "MaxJobs" in
http://www.cups.org/documentation.php/doc-1.5/ref-cupsd-conf.html
-----------------------------------------------------------------------
The MaxJobs directive controls the maximum number of jobs that
are kept in memory. Once the number of jobs reaches the limit,
the oldest completed job is automatically purged
...
The default setting is 500.
-----------------------------------------------------------------------
I.e. in your case the number of jobs that are kept in memory has grown
from about 400 * 500 = 200000 to more than 1200 * 500 = 600000.
Perhaps in your case more than 600000 jobs kept in memory is too much?
My calculation is wrong because I confused MaxJobs with MaxJobsPerPrinter.

In your case (1200 print queues) the MaxJobs default of 500 jobs
could be too low, because new jobs will be rejected if 500 jobs
are still pending or active (so you could not print simultaneously
on all 1200 of your print queues).

Therefore you might have set "MaxJobs 0", but then the MaxJobsPerPrinter
default setting (MaxJobsPerPrinter 0) means that the number of jobs
kept in memory can grow without limit.

At least this is how I understand the documentation.
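
If that reading is right, a possible middle ground would be to raise
MaxJobs instead of disabling it entirely; the numbers below are only
illustrative:

  # /etc/cups/cupsd.conf
  MaxJobs 5000            # well above the 500 default, but still a finite cap
  # MaxJobs 0             # no cap at all - job history can grow without limit
  MaxJobsPerPrinter 0     # the CUPS default: no per-printer limit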


Kind Regards
Johannes Meixner
--
SUSE LINUX Products GmbH -- Maxfeldstrasse 5 -- 90409 Nuernberg -- Germany
HRB 16746 (AG Nuernberg) GF: Jeff Hawn, Jennifer Guild, Felix Imendoerffer