This is generally not recommended, but there's no hard and fast rule.
It all depends on how large your farm is and how fast the supervisor hardware is. If you're running fewer than 10 workers, usually run jobs whose frames or tasks each take at least several minutes, and don't output much log data, then you might get away with this.
If your farm is larger, or you run a lot of composites or other job types where frames finish in 10 seconds or less, or your jobs simply spew a lot of log data, then you may find that supervisor performance suffers.
The first symptom of degraded performance is a lot of running subjobs that have no work assigned to them. This is visible in the "subjob timeline" pane in the QubeGUI: if the horizontal graphs show long skinny sections between the fatter sections, the subjobs are running but don't have a frame to process (the skinny section in the middle is the subjob itself, and the fatter sections around it are the individual frames that the subjob is working on).
In any case, I would strongly recommend that you configure your farm so that the supervisor processes don't have to handle the job log data. See the posting on this forum, "Writing job logs directly to a network filesystem":
http://www.pipelinefx.com/forum/index.php?topic=1137.0 Set up the "shared location" mentioned in that thread on one of the fast external filesystems that this machine is serving out.
This way, it's only the file server portion of the machine that is handling the log data, and not the Qube supervisor processes themselves.
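As a rough sketch of what that configuration might look like, here's a hedged example in qb.conf style. The parameter names (supervisor_logpath, client_logpath) and the example path are assumptions on my part; verify the exact names against the thread above and the Qube documentation for your version before using them.

```
# qb.conf sketch -- parameter names and path are assumptions, verify
# against your Qube version's documentation and the forum thread above.

# On the supervisor: store job logs on the fast shared filesystem
# rather than the supervisor's local disk.
supervisor_logpath = /shared/fastfs/qube/logs

# On each worker: point at the same shared path so workers write their
# frame logs directly, bypassing the supervisor processes entirely.
client_logpath = /shared/fastfs/qube/logs
```

The key idea is simply that both sides reference the same network-visible directory, so log I/O is handled by the file server rather than funneled through the supervisor.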