----- Original Message -----
Hi Phil,
On Mo 11 Apr 2011 13:25:55 CEST "--[ UxBoD ]--" wrote:
----- Original Message -----
Hi Phil,
On Mo 11 Apr 2011 12:07:57 CEST "--[ UxBoD ]--" wrote:
Yep, that is my interpretation of the process as well, Mike. One thing I have noticed is that if you print the resultant PDF, the file is left in the spool directory without being deleted; yet if you cancel from the x2go-print dialogue box, the file is removed.
is that x2goclient or PyHoca-GUI/Python X2go? With PyHoca-GUI this should not happen, otherwise it is a bug I have missed so far...
Greets, Mike
x2goclient 3.01.18.
we have an unofficial bug tracker that we (the devs and the community) still have to discuss. Maybe you could already file such issues there; if we drop that software again (Horde), we will certainly migrate the tickets that have been filed.
Otherwise there is a real chance that tiny issues like this get forgotten...
Greets, Mike
Mike,
I think I may have worked out why some print jobs are failing and it would appear to be an x2go issue and not SSHFS/WAN.
What is happening is that the Perl script, x2goprint, moves the PDF from /var/spool/x2goprint into the user's mounted spool directory and then creates the .ready file. The x2goclient looks for any new files in that directory, and if it sees the .ready file it pops up the x2go print dialogue window. Now, looking at the code, it appears to remove the file as well; in 3.01-18 the pertinent code is in onmainwindow_part4.cpp at line 200.
I am thinking this is a timing issue between the .ready file being written and the slot which monitors the local spool directory firing off an event.
Thanks, Phil
----- Original Message -----
Just not sure how to solve it yet :(
Okay, in onmainwindow_part4.cpp, here is the bit of code which I believe is causing the issue:
if ( !file.open ( QIODevice::ReadOnly | QIODevice::Text ) )
    continue;
bool startProc=false;
QString fname,title;
if ( !file.atEnd() )
My theory is that a lock still exists on the .ready file, so the file.open() call fails, the loop continues, and the code ultimately removes the .ready file. Taking into account the loop I put into x2goprint, the script then sees that the file has disappeared and writes it again. This is why I believe we see a different count each time.
Thanks, Phil
----- Original Message -----
Any thoughts on how to resolve this one?
Reading the Qt document I see:
"Returns true if the current read and write position is at the end of the device (i.e. there is no more data available for reading on the device); otherwise returns false. For some devices, atEnd() can return true even though there is more data to read. This special case only applies to devices that generate data in direct response to you calling read() (e.g., /dev or /proc files on Unix and Mac OS X, or console input / stdin on all platforms)."
I am guessing that when the Perl script opens the file for writing, the x2goclient is so quick that it sees the file as at the end straight away, drops past the if block, and then removes the .ready file.
if ( !file.atEnd() )
{
QByteArray line = file.readLine();
QString fn ( line );
fn.replace ( "\n","" );
fname=fn;
if ( !file.atEnd() )
{
line = file.readLine();
title=line;
title.replace ( "\n","" );
}
startProc=true;
}
Thanks, Phil
Hi Phil,
On Di 12 Apr 2011 17:11:25 CEST "--[ UxBoD ]--" wrote:
What is happening is that the Perl script, x2goprint, moves the PDF from /var/spool/x2goprint into the user's mounted spool directory and then creates the .ready file. The x2goclient looks for any new files in that directory, and if it sees the .ready file it pops up the x2go print dialogue window. Now, looking at the code, it appears to remove the file as well; in 3.01-18 the pertinent code is in onmainwindow_part4.cpp at line 200.
in the pending patch for x2goprint I have changed the mechanism a little. The reason is that we have to presume, in general, that root cannot read/write the user's home (e.g. if homes are on NFSv3 with root squashing, NFSv4 + Krb5, AFS + Krb5, etc.). So what I do is:
o the x2goprint script runs as root (sudo from the x2goprint user)
o copy the print jobs to /tmp/spool_<user>/tmp
o chown the files to <user>
o su - to the user and move the print job directly into /tmp/spool_<user>/<session_id>
... instead of taking the detour via the home dir...
http://code.x2go.org/gitweb?p=x2goserver.git;a=commitdiff;h=16cdb70f5bbd1298...
I am thinking this is a timing issue between the .ready file being written and the slot which monitors the local spool directory firing off an event.
With PyHoca-GUI I have this strategy:
o one printqueue thread is waiting for print jobs
o if a job appears, another thread is started that handles the print action
  (pdfview, pdfsave, print, ...)
o the print action immediately creates a "local" copy of the print job (to
  make sure it does not vanish while processing it)
o meanwhile, the first thread counts to 60 seconds and then deletes the
  original set of job files
o the print actions, however, have different ways of handling their local copy:
  o on Windows, after the print action task has been processed (e.g. opening
    a PDF viewer), a function keeps checking whether the file can be deleted;
    while it is locked by the file system, it retries the deletion until one
    attempt succeeds
  o on Linux, I delete the files directly once I am sure the job has been
    processed (e.g. waiting long enough, testing for a print result, ...)
I guess that the creation of a local copy of a print job might be a
solution...
Greets, Mike
--
DAS-NETZWERKTEAM mike gabriel, dorfstr. 27, 24245 barmissen fon: +49 (4302) 281418, fax: +49 (4302) 281419
GnuPG Key ID 0xB588399B mail: mike.gabriel@das-netzwerkteam.de, http://das-netzwerkteam.de
freeBusy: https://mail.das-netzwerkteam.de/freebusy/m.gabriel%40das-netzwerkteam.de.xf...