
GLSL noob problem...

Posted: Mon May 17, 2010 5:01 pm
by Zylyx_
Hi all!

Well, my journey into OpenGL is progressing. I've made a few projects using the fixed-function pipeline from the standard OpenGL 1.1 specification. Now I'm moving on to OpenGL 2.0 and GLSL.

I'm using GLEW and GLUT to test a basic shader manager class. However, I'm getting an access violation error whenever I try to run the program. I think the problem lies with the shader and program objects. Anyways, here is the code:

SHMAN.h

Code:

//GLSL shader manager interface

#ifndef _SHMAN_H_
#define _SHMAN_H_

//system includes
#include <GL/glew.h>
#include <GL/gl.h>
#include <iostream>

#define MAX_INFO_LOG_SIZE 2048

//Shader manager class
class SHMAN
{
public:

	//default constructor, get the shaders
	SHMAN(const GLchar * vertShader, const GLchar * fragShader)
	{
		vertShaderSrc = vertShader;
		fragShaderSrc = fragShader;
	}

	//SHMAN destructor
	~SHMAN()
	{
		CleanUp();
	}

	//Shader manager core functions
	void CreateSHMANObject();
	void LoadShaders();
	void CompileShaders();
	void AttachToProgObj();
	void LinkProgObj();
	void ValidateProgObj();
	void UseProjObj();
	void CleanUp(); //detach shaders from program objects and delete the shader objects
	
private:
	GLint success;
	GLint currentGPUProg;
	const GLchar * vertShaderSrc;
	const GLchar * fragShaderSrc;
	GLchar log[MAX_INFO_LOG_SIZE]; //buffer the info-log calls write into

	GLuint shmanProg;
	GLuint shmanVertsh;
	GLuint shmanFragsh;
};

#endif
And the interface implementation:

Code:


#include "shman.h"

using namespace std;

void SHMAN::CreateSHMANObject()
{
	shmanVertsh = glCreateShader(GL_VERTEX_SHADER); //create new vertex shader object
	shmanFragsh = glCreateShader(GL_FRAGMENT_SHADER); //create new fragment shader object
}

void SHMAN::LoadShaders()
{
	//load vertex shader
	const GLchar *strPtr[1];
	strPtr[0] = vertShaderSrc;
	glShaderSource(shmanVertsh, 1, strPtr, NULL);
	//load fragment shader
	strPtr[0] = fragShaderSrc;
	glShaderSource(shmanFragsh, 1, strPtr, NULL);
}

void SHMAN::CompileShaders()
{
	//compile the vertex shader and and get compilation status
	glCompileShader(shmanVertsh);
	glGetShaderiv(shmanVertsh, GL_COMPILE_STATUS, &success);

	if(!success)
	{
		glGetShaderInfoLog(shmanVertsh, MAX_INFO_LOG_SIZE, NULL, log);
		cout << "\nVertex shader compilation error!" << endl;
		cout << "\nPlease see compilation log details below: " << endl;
		cout << "\n" << log << endl;
	}

	//compile the fragment shader and get compilation status
	glCompileShader(shmanFragsh);
	glGetShaderiv(shmanFragsh, GL_COMPILE_STATUS, &success);

	if(!success)
	{
		glGetShaderInfoLog(shmanFragsh, MAX_INFO_LOG_SIZE, NULL, log);
		cout << "\nFragment shader compilation error!" << endl;
		cout << "\nPlease see compilation log details below: " << endl;
		cout << "\n" << log << endl;
	}
}

void SHMAN::AttachToProgObj()
{
	shmanProg = glCreateProgram();
	glAttachShader(shmanProg, shmanVertsh);
	glAttachShader(shmanProg, shmanFragsh);
}

void SHMAN::LinkProgObj()
{
	//link the program object
	glLinkProgram(shmanProg);
	glGetProgramiv(shmanProg, GL_LINK_STATUS, &success);

	if(!success)
	{
		glGetProgramInfoLog(shmanProg, MAX_INFO_LOG_SIZE, NULL, log);
		cout << "\nGPU program object link error!" << endl;
		cout << "\nPlease see the link log details below: " << endl;
		cout << "\n" << log << endl;
	}
}

void SHMAN::ValidateProgObj()
{
	//validate the program before running on GPU
	glValidateProgram(shmanProg);
	glGetProgramiv(shmanProg, GL_VALIDATE_STATUS, &success);

	if(!success)
	{
		glGetProgramInfoLog(shmanProg, MAX_INFO_LOG_SIZE, NULL, log);
		cout << "\nGPU program validation error!" << endl;
		cout << "\nPlease see the validation log details below: " << endl;
		cout << "\n" << log << endl;
	}
}

void SHMAN::UseProjObj()
{
	glUseProgram(shmanProg);

	//disable if un-needed
	//glGetIntegerv(GL_CURRENT_PROGRAM, &currentGPUProg);
	//cout << "\nCurrent GPU program object handle: " << currentGPUProg << " \n" << endl;
}

void SHMAN::CleanUp()
{
	glDetachShader(shmanProg, shmanVertsh);
	glDetachShader(shmanProg, shmanFragsh);
	glDeleteProgram(shmanProg);
}
It would be really awesome if any of you could look at my code and explain to me what I'm doing wrong. Thank you in advance!

Posted: Mon May 17, 2010 5:33 pm
by mh
You need to allocate memory and then memcpy in your constructor otherwise you risk what is being pointed to going out of scope before it's used.
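One way to sketch mh's point without manual malloc/memcpy at all (hypothetical class and member names; std::string does the allocate-and-copy for you):

```cpp
#include <string>

// Sketch: the class must own copies of the source strings, not just
// pointers into memory it doesn't control. std::string members perform
// the deep copy automatically in the constructor's initializer list.
class ShaderSources
{
public:
	ShaderSources(const char *vert, const char *frag)
		: vertSrc(vert), fragSrc(frag) {} //deep copies happen here

	const char *Vert() const { return vertSrc.c_str(); }
	const char *Frag() const { return fragSrc.c_str(); }

private:
	std::string vertSrc;
	std::string fragSrc;
};
```

Even if the caller's buffer is freed or overwritten after construction, Vert()/Frag() stay valid, because the object holds its own copies.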

Posted: Mon May 17, 2010 7:02 pm
by Zylyx_
Yeah, I tried that, but I still get the same problem. Here is what I added to the constructor:

Code:

//default constructor, get the shaders
	SHMAN(const GLchar * vertShader, const GLchar * fragShader)
	{
		vertShaderSrc = (GLchar*) malloc(sizeof(vertShader));
		memcpy(&vertShaderSrc, &vertShader, *vertShaderSrc);

		fragShaderSrc = (GLchar*) malloc(sizeof(fragShader));
		memcpy(&fragShaderSrc, &fragShader, *fragShaderSrc);
	}

Posted: Mon May 17, 2010 9:44 pm
by r00k
Shouldn't memcpy be (dest, src, size)?

memcpy(&fragShaderSrc, &fragShader, sizeof(fragShaderSrc));
Of course, now I'm confused about which is your source and which is your destination... :P

Posted: Mon May 17, 2010 10:00 pm
by Spike
vertShaderSrc = (GLchar*) malloc(strlen(vertShader)+1);
memcpy(vertShaderSrc, vertShader, strlen(vertShader)+1);

is what you want.

Your code allocates 4 bytes then copies between 0 and 255 bytes onto the stack, corrupting it.

your vertShader and vertShaderSrc variables are pointers, you don't wanna take the address of the pointer here... that's absurd.

or you could just use:
vertShaderSrc = strdup(vertShader);
but that would make things too easy.
Alternatively, as vertShader is a pointer to a string, if that's always going to be a string immediate that is passed in, then you don't even need the strdup, but yeah, assumptions are assumptions.

Posted: Mon May 17, 2010 11:11 pm
by mh
Or alternatively front-load everything into your constructor; compile, link, etc in there. It may not give you all of the flexibility you might want, but consider if you need that flexibility. If you're writing a more general purpose shader management tool maybe you do, but otherwise I would argue that maybe you don't.
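A rough sketch of that shape (not mh's exact code, just the idea; assumes a current GL context, an initialized GLEW, and elides error checking):

```cpp
#include <GL/glew.h>

// Sketch of mh's suggestion: the constructor does the whole
// create/source/compile/attach/link sequence, so a successfully
// constructed object is always a usable program.
class SimpleProgram
{
public:
	SimpleProgram(const GLchar *vertSrc, const GLchar *fragSrc)
	{
		GLuint vs = glCreateShader(GL_VERTEX_SHADER);
		GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
		glShaderSource(vs, 1, &vertSrc, NULL);
		glShaderSource(fs, 1, &fragSrc, NULL);
		glCompileShader(vs);
		glCompileShader(fs);

		prog = glCreateProgram();
		glAttachShader(prog, vs);
		glAttachShader(prog, fs);
		glLinkProgram(prog);

		//the program keeps attached shaders alive, so the shader
		//objects can be flagged for deletion immediately
		glDeleteShader(vs);
		glDeleteShader(fs);
	}

	~SimpleProgram() { glDeleteProgram(prog); }
	void Use() const { glUseProgram(prog); }

private:
	GLuint prog;
};
```

The trade-off is exactly what mh describes: you lose the ability to re-link or swap shaders later, but for a fixed vertex/fragment pair that flexibility is rarely needed.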

Posted: Tue May 18, 2010 5:31 pm
by Zylyx_
No luck. I resorted to using my graphics programming lecturer's code, but I still get the same error.

Here is all of his code (and it does work, because we have to use it for coursework next semester):

GLSLshader.h

Code:

#ifndef _GLSLSHADER_H
#define _GLSLSHADER_H

#include <GL/glew.h>
#include <string>



class GLSLshader
{
public:	
	//constructor creates
	GLSLshader(const std::string &filename, GLenum shaderType );
	char	*ReadShaderFromFile(const std::string &filename);
	bool	CompileShader();
	
	//the handle used to identify the gl shader object
	GLuint ShaderID;

private:
	bool Compiled;
  
};


#endif  
GLSLshader.cpp

Code:

  #include "GLSLshader.h"
#include <string>
#include <fstream>
#include <iostream>

using namespace std;


GLSLshader::GLSLshader(const std::string &filename, GLenum shaderType )
{	
	Compiled = false;
	//initialise our shader to the correct shader type we require.
	ShaderID = glCreateShader(shaderType);
	const char *source = ReadShaderFromFile(filename);
	
	//read just a single shader from the string we created from the file.
	glShaderSource(ShaderID, 1, static_cast<const GLchar**>(&source), NULL);

	//compile
	Compiled = CompileShader();
	//insert error checking

	delete [] source;

}

char	*GLSLshader::ReadShaderFromFile(const std::string &filename)
{
	//file reader taken from opengl Blue book.
	ifstream temp(filename.c_str());
	int count = 0;
	char *buf;
  
	temp.seekg(0, ios::end);
	count = temp.tellg();
  
	buf = new char[count + 1];
	memset(buf,0,count);
	temp.seekg(0, ios::beg);
	temp.read(buf, count);
	buf[count] = 0;
	temp.close();
  
	return buf;
}

bool	GLSLshader::CompileShader()
{
	glCompileShader(ShaderID);

	int ShaderCompileSuccess;
	glGetObjectParameterivARB(ShaderID, GL_OBJECT_COMPILE_STATUS_ARB, &ShaderCompileSuccess); 
	
	if(!ShaderCompileSuccess)
	{
		return false;	
	}
	return true;
} 
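The "//insert error checking" placeholder in the constructor above would typically fetch the shader info log. A sketch of that step as a hypothetical standalone helper (assumes a current GL context; the name PrintShaderLog is made up):

```cpp
#include <GL/glew.h>
#include <iostream>
#include <string>

// Sketch: query the info-log length first, then fetch the log into a
// buffer of exactly that size, instead of guessing a maximum.
static void PrintShaderLog(GLuint shaderID)
{
	GLint logLen = 0;
	glGetShaderiv(shaderID, GL_INFO_LOG_LENGTH, &logLen);
	if (logLen > 1) //driver reports 0 or 1 (just the terminator) when empty
	{
		std::string log(logLen, '\0');
		glGetShaderInfoLog(shaderID, logLen, NULL, &log[0]);
		std::cerr << "Shader compile log:\n" << log << std::endl;
	}
}
```

Calling this from CompileShader() when ShaderCompileSuccess is false would turn a silent `return false` into an actionable compiler message.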
GLSLProgram.h

Code:

#ifndef _GLSLPROGRAM_H
#define _GLSLPROGRAM_H

#include <GL/glew.h>
#include <string>

class GLSLprogram
{
public:
	GLSLprogram();
	void SetupProgram(const std::string &vertexshaderfilename,const std::string &fragmentshaderfilename);

	GLuint ProgramID;
private:
		
  
};



#endif 
GLSLProgram.cpp

Code:

 #include "GLSLprogram.h"
#include "GLSLshader.h"
#include <GL/glew.h>
#include <string>
#include <fstream>
#include <iostream>

GLSLprogram::GLSLprogram()
{

}
void GLSLprogram::SetupProgram(const std::string &vertexshaderfilename, const std::string &fragmentshaderfilename)
{
	int success;

	GLuint test = glCreateProgram();
	ProgramID = glCreateProgram();
	static GLSLshader VertShade(vertexshaderfilename, GL_VERTEX_SHADER);
	static GLSLshader FragShade (fragmentshaderfilename, GL_FRAGMENT_SHADER);
	glAttachShader(ProgramID, VertShade.ShaderID);
	glAttachShader(ProgramID, FragShade.ShaderID);
	glLinkProgram(ProgramID);
	glGetProgramiv(ProgramID, GL_LINK_STATUS, &success);
	
	if(!success)
	{
		//error check

	}

	glValidateProgram(ProgramID);


}
  
Basically, I'm still getting the runtime access violation associated with the GLuints representing the shader and GPU program objects. The error happens the moment the return value of glCreateShader() or glCreateProgram() is assigned to the identifier.

And yes, I'm calling glewInit() in main().

Posted: Tue May 18, 2010 5:32 pm
by Zylyx_
One big factor that might be the root of the problem is that I'm using Visual Studio 2008...

Posted: Tue May 18, 2010 5:44 pm
by Zylyx_
Yup, MSVC2008 might well be the culprit.

I made a very simple test application just to see if assigning the result of glCreateShader() to a GLenum variable would cause the same access violation at run time, and it did.

Code:

//system includes
#include <GL/glew.h>		
#include <GL/gl.h>		
#include <GL/glut.h>
#include <windows.h>		
#include <iostream>

GLenum tmp;

void test()
{
	tmp = glCreateShader(GL_VERTEX_SHADER);
}

int main(int argc, char* argv[])
{
	glewInit();
	glutInit(&argc, argv);

	test();

	system("PAUSE");

	return 0;
}
Anyone got any suggestions?
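For what it's worth, this crash pattern is consistent with calling glCreateShader before an OpenGL context exists: under GLEW, glCreateShader is a function pointer that stays NULL until glewInit resolves it, and glewInit itself only succeeds once a context is current. In the test app above, glewInit runs before glutInit, and no window (hence no context) is ever created. A minimal sketch of a working order (assuming GLUT/freeglut):

```cpp
#include <GL/glew.h>
#include <GL/glut.h>
#include <iostream>

int main(int argc, char *argv[])
{
	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
	glutCreateWindow("shader test");	//creates a GL context and makes it current

	GLenum err = glewInit();		//must run AFTER a context exists
	if (err != GLEW_OK)
	{
		std::cerr << "glewInit failed: " << glewGetErrorString(err) << std::endl;
		return 1;
	}

	GLenum tmp = glCreateShader(GL_VERTEX_SHADER);	//safe now: the entry point is resolved
	std::cout << "shader object handle: " << tmp << std::endl;
	return 0;
}
```

With the window created first, the glCreateShader pointer is non-NULL by the time it is called, which is exactly the condition mh's breakpoint test below is probing for.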

Posted: Tue May 18, 2010 7:50 pm
by mh
Switch your character set from Unicode to Multibyte in Project Properties, and that's a pint of beer you owe me. :D

Posted: Tue May 18, 2010 8:02 pm
by Zylyx_
Update:

That's the first thing I do whenever I start a new project... my project has been using the multi-byte character set all this time, lol.

Anyways, I even installed VS2005 a few minutes ago, and I STILL GET THE SAME PROBLEM!!!!!!!!! ARGHHHHHHH!!!!!

However, on a more positive note, I did get a very complex GLSL shader management example to work, but it's pretty intense...
see here: http://www.opengl.org/sdk/docs/tutorial ... oading.php

I'm just so annoyed that I have to spend so much time setting up crap like this, when I should be focusing on actually learning GLSL and writing shaders.

Posted: Tue May 18, 2010 8:25 pm
by mh
OK then, run it in the debugger. Set a breakpoint here:

Code:

test();
And here:

Code:

system("PAUSE");
And here:

Code:

return 0;
What happens? And which line does it happen on?

Step 2. Change this:

Code:

void test()
{
   tmp = glCreateShader(GL_VERTEX_SHADER);
}
to this:

Code:

void test()
{
   tmp = glCreateShader(GL_VERTEX_SHADER);
   tmp = tmp;
}
Set a breakpoint on each line here. When it breaks on the first, examine the address of glCreateShader. Is it NULL or non-NULL? Run to the next breakpoint. Does it explode? If not, examine the value of tmp. You should have a handle or whatever in there if it worked.

Step 3. Try replacing the glCreateShader call with something simpler; backtracking like this can be useful to isolate problems. A call to glFinish should be enough. Does it explode?

Posted: Fri May 21, 2010 9:39 pm
by Zylyx_
Thanx for the ongoing help. I'm away from my PC till Monday, but when I get back, I'll update you on my progress asap! Many Thanx!

Posted: Mon May 24, 2010 12:25 pm
by Zylyx_
Solved:

(screenshot)

I can't believe how much crap I had to go through to get this working. I'm using the provided game framework (not mine, but my lecturer's), which we will have to use to make a game over the next two semesters when I get back to university (I'm on summer holiday now).

Here is the shader code:

Code:

//Basic vertex shader

void main(void)
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Code:

//basic fragment shader

void main(void)
{
   gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
There is still a problem, and that is that I can only run the debug/release builds of the program from inside Visual Studio, but not on their own. This will require another week of investigation, lol.

I mean, come on, the Khronos Group should publish the "Purple Book", aka "The Guide to Setting Up OpenGL in the 21st Century: A Modern Approach". We can all help write it!

Posted: Mon May 24, 2010 6:26 pm
by Sajt
If you're going to mix shader rendering with fixed-function rendering (i.e. render some objects using both methods in the same scene), you need to replace

Code:

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
with

Code:

gl_Position = ftransform();
or you will get some ugly depth buffer inconsistencies, because ftransform() is guaranteed to match the fixed-function vertex transform exactly, while gl_ModelViewProjectionMatrix * gl_Vertex is not. But you probably already knew that.