Hello,
After a bit of grief (just finding an SGI with a compiler on it) I can see
what is causing the problem but the "solution" so far is not complete.
In order to know what "drawable" (OpenGL jargon) is being drawn in, the
Python widget gets passed to the C world and a "context" (more OpenGL
jargon) is created, and to do that you need a "visual" (more OpenGL
jargon). Then when you want to draw into that drawable you make a call to
glXMakeCurrent() with that drawable and context as arguments. The
glXMakeCurrent() man page says:
"BadMatch is generated if drawable was not created with the same X screen
and visual as ctx. It is also generated if drawable is None and ctx is not
None."
I checked explicitly and drawable and ctx (the context) are both not None.
And there is only one screen. So it looks like the drawable was not
created with the same visual as ctx. Like I said, the drawable comes from
the Python world and which visual it is created with is a bit beyond our
control. The visual in the C world is created as follows (this is in the
function new_gl_handler() in ccpnmr1.0/c/ccpnmr/global/gl_handler.c):
visual = glXChooseVisual(display, DefaultScreen(display), dblBuf);
where dblBuf is defined a bit above that line as:
static int dblBuf[] = {GLX_RGBA, GLX_DOUBLEBUFFER, None};
The first two attributes mean:
GLX_RGBA: If present, only TrueColor and DirectColor visuals are
considered. Otherwise, only PseudoColor and StaticColor visuals are
considered.
GLX_DOUBLEBUFFER: If present, only double-buffered visuals are
considered. Otherwise, only single-buffered visuals are considered.
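If the server genuinely has no double-buffered visual, one option would be to try the double-buffered attribute list first and only drop GLX_DOUBLEBUFFER when glXChooseVisual() returns NULL. A sketch (not tested on the SGI, and it will not help if the real problem is a mismatch with the Tk widget's visual):

```c
#include <stdio.h>
#include <GL/glx.h>

/* Attribute lists: double-buffered first, single-buffered as a fallback. */
static int dblBuf[] = {GLX_RGBA, GLX_DOUBLEBUFFER, None};
static int snglBuf[] = {GLX_RGBA, None};

XVisualInfo *choose_visual(Display *display)
{
    int screen = DefaultScreen(display);
    XVisualInfo *visual;

    /* Prefer a double-buffered TrueColor/DirectColor visual. */
    visual = glXChooseVisual(display, screen, dblBuf);
    if (visual == NULL) {
        /* No double-buffered visual on this server; fall back. */
        fprintf(stderr, "no double-buffered visual, using single-buffered\n");
        visual = glXChooseVisual(display, screen, snglBuf);
    }

    return visual;
}
```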
If I use dblBuf as is, or if I remove either but not both of those
arguments, then I get the BadMatch problem. I am not too bothered about
GLX_RGBA, but I am about GLX_DOUBLEBUFFER. If I remove both, so that I have:
static int dblBuf[] = {None};
then I do not get the BadMatch problem, so I end up being able to display
contours. However, the whole thing does not really work properly: the
background is black instead of white, the crosshair does not get
refreshed (so xor mode is not working), etc. (This is probably
because the Python code pretty much assumes double buffering is working.)
So this is not sorted yet.
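One way to confirm the diagnosis would be to print the visual ID of the Tk drawable and the visual ID used to create the context, just before the glXMakeCurrent() call; if they differ, that is the BadMatch. A sketch, assuming we have the Window and the XVisualInfo* from glXChooseVisual() to hand:

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>

/* Print the visual IDs of the drawable (the Tk widget's window) and of
   the XVisualInfo used to create the GLX context; BadMatch from
   glXMakeCurrent() means these two do not agree. */
void report_visual_mismatch(Display *display, Window drawable,
                            XVisualInfo *ctx_visual)
{
    XWindowAttributes attrs;

    if (!XGetWindowAttributes(display, drawable, &attrs))
        return;

    printf("drawable visual ID: 0x%lx\n",
           (unsigned long) XVisualIDFromVisual(attrs.visual));
    printf("context visual ID:  0x%lx\n",
           (unsigned long) ctx_visual->visualid);
}
```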
Is there double buffering on these oldish SGIs? I thought there was, but
perhaps not.
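It should be possible to check rather than guess: running glxinfo on the SGI lists the available visuals, or a few lines of C can ask each visual directly via glXGetConfig(). A sketch I have not run on the SGI:

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>

/* List every visual on the default screen and say whether it supports
   GLX and double buffering. */
int main(void)
{
    Display *display = XOpenDisplay(NULL);
    XVisualInfo template, *visuals;
    int i, nvisuals, use_gl, double_buffer;

    if (display == NULL)
        return 1;

    template.screen = DefaultScreen(display);
    visuals = XGetVisualInfo(display, VisualScreenMask, &template, &nvisuals);

    for (i = 0; i < nvisuals; i++) {
        if (glXGetConfig(display, &visuals[i], GLX_USE_GL, &use_gl) != 0)
            continue;  /* not a GLX-capable visual */
        if (!use_gl)
            continue;
        glXGetConfig(display, &visuals[i], GLX_DOUBLEBUFFER, &double_buffer);
        printf("visual 0x%lx: double-buffered = %s\n",
               (unsigned long) visuals[i].visualid,
               double_buffer ? "yes" : "no");
    }

    XFree(visuals);
    XCloseDisplay(display);
    return 0;
}
```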
Wayne
On Tue, 26 Oct 2004, Borlan Pan wrote:
> Unfortunately, neither GL_FALSE nor GL_TRUE helps.
>
> Borlan
>
> Wayne Boucher wrote:
>
> >Hello,
> >
> >I've done a bit of a trawl on google and as usual the question appears a
> >few times but not the answer. In particular already back in 1999 someone
> >had this problem with exactly the same note about it working on another
> >display:
> >
> >http://oss.sgi.com/projects/performer/mail/info-performer/perf-99-08/0000.html
> >
> >Someone in 2002 also had this kind of problem trying to use another display
> >(so even worse than you are having, but the "X Error of failed request"
> >was different):
> >
> >http://oss.sgi.com/projects/performer/mail/info-performer/perf-02-01/0004.html
> >
> >and said they had tried xhost to sort this out but it did not.
> >
> >Now recently we changed one of the parameters in one of the first OpenGL
> >calls because it was causing the non-drawing of contours on Linux boxes
> >using native Nvidia OpenGL drivers. You could try changing this back to
> >see what happens. So in ccpnmr1.0/c/ccpnmr/global/gl_handler.c in the
> >function new_gl_handler() there is a line:
> >
> > context = glXCreateContext(display, visual, None, GL_FALSE);
> >
> >and you could change this back to:
> >
> > context = glXCreateContext(display, visual, None, GL_TRUE);
> >
> >(it's commented out in the text above the current version). Then type
> >"make" and "cd ../analysis" and "make" and try running Analysis again.
> >
> >If that works then we can try to put both variants in (somehow). (My
> >guess is that it will not solve it but you never know.)
> >
> >Wayne
> >
> >
>