PipelineFX Forum
Qube! => Installation and Configuration => Topic started by: stevemagg on June 07, 2007, 06:38:31 PM
-
Trying to pass user environment to qube submission. In this case, PATH info. Seems qube is running in bash...
Anyone passing tcsh env variables to qube?
I understand there is a way to modify jobTypes to accommodate this?
Thanks.
Steve
-
Hey Steve,
There really isn't a way to modify this; however, you might be able to export your environment from tcsh to bash. The user's environment is loaded by setting up a specialized .bashrc. I'll do a bit of research to get you that information. As far as allowing you to modify your shell goes, we've added it to our feature request database.
Thanks,
Anthony
-
You might also try adding the "export_environment" job flag in your client's qb.conf:
For example:
client_job_flags = auto_mount,export_environment
If you prefer to use the Configuration GUI, set the "Export Environment" switch under the Client Settings tab.
-
export_environment doesn't seem to do it.
In our environment, when a user starts a new shell it sources .tcshrc which sets a few global and a few user specific variables, and then runs a python script that spits out the rest of the environment.
We need to both source ~/.tcshrc and run that python script.
The python script (and all modules it relies on) are tcsh specific.
If Qube is meant to use the user's environment, how do you hope to accomplish that without using the user's shell of choice and its related configuration files?
Thanks
Dado
-
The idea behind the export_environment flag is to attach the user's environment variables to the job at submission time. When the job executes, the worker sets those "exported" environment variables.
You can test this by running
env
Then submitting a job:
qbsub --requirements host.os=linux --flags export_environment env
When you check the log output and compare it to the env command, you should see similar variables.
This, of course, won't work if you depend upon shell variables, which are different from environment variables.
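The difference is easy to demonstrate at a prompt (a minimal bash sketch; in tcsh the equivalents are "set" for shell variables and "setenv" for environment variables):

```shell
# A shell variable lives only in the current shell process.
SHELLVAR="local"           # tcsh equivalent: set SHELLVAR=local
# An environment variable is inherited by child processes.
export ENVVAR="exported"   # tcsh equivalent: setenv ENVVAR exported

# A child process -- such as a submitted job -- sees only the
# exported variable, never the plain shell variable.
bash -c 'echo "SHELLVAR=[$SHELLVAR] ENVVAR=[$ENVVAR]"'
# prints: SHELLVAR=[] ENVVAR=[exported]
```

Anything you need a job to see must be exported, not merely set.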
-
Hi!
I'm running into a similar problem. We're using tcsh shells here, so nothing is set up for bash. Simply using export_environment isn't enough, since the PATH isn't being exported along with everything else, and our infrastructure depends on the PATH being correct.
I was wondering if a solution has been found, or if I'm going to need to reset the path as part of my command (or rework the infrastructure).
Thanks,
Zameer
-
The Qube! libraries use "su - <username> env" to query the environment from the system for a particular user.
If your user's default shell isn't set properly using "chsh" or "ypchsh", then you'll probably find that the worker is setting the shell inappropriately. Just to make sure that the simple things haven't been overlooked: if you run the above command as "root", do you see the expected environment?
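To rule that out, you can check which login shell the system records for a given user (a quick sketch; "getent" consults NIS as well as /etc/passwd, so it covers the ypchsh case too):

```shell
# Look up the account entry; the last colon-separated field is the
# login shell that "su - <user>" will start.
getent passwd "$(id -un)" | awk -F: '{print $NF}'
# e.g. /bin/tcsh -- if this prints /bin/bash unexpectedly,
# correct it with chsh (local accounts) or ypchsh (NIS accounts)
```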
A.
-
Here's the output from this command
[root@lips ~]$ su - todd env
/usr/bin/env: /usr/bin/env: cannot execute binary file
Also, our environment is currently only initialized for login shells; unless you pass su the -l flag, you won't receive a login shell.
-
Hey Zameer,
Actually, the "-" sign is the same as "-l" and "--login". Is there a reason why the "env" command appears to be broken?
A.
-
Running env by itself works fine, so it isn't broken; perhaps the syntax of the command needs work.
The correct syntax for this command under 'su' version 5.93 is
su - todd -c env
Running that command properly executes the tcsh startup scripts, and the resulting environment contains SHELL=/bin/tcsh. This works in both our Linux and Mac environments.
I'm currently executing tcsh -c "command" as my command to qbsub, but we are looking for a more permanent solution.
Zameer
-
Hey Zameer,
Just for our information, what version of Linux are you running?
Actually I mistyped the command we use... the exact command is:
su - <user> -c "\echo __WIPE__ && \echo __QBSTART__ && \env"
Does the su - todd -c env work properly?
A.
-
Hiya,
We're using Suse 10.1
The su - <user> -c "\echo __WIPE__ && \echo __QBSTART__ && \env" printed our environment after it executed the tcsh init scripts and your two echoes.
The su - todd -c env behaves pretty much the same way, except it didn't print the echoes.
Zameer
-
Hey Zameer,
If you could send us the printout of the "env" command, that would help us a lot. If you are uneasy about posting the output, please send it to our support email so that we can handle the case directly.
Thanks,
Anthony