Thursday, June 30, 2011

OpenGL Lighting

Here I will show you how to use OpenGL lighting within your OpenGL application.
You enable depth testing, enable lighting, and enable the particular light you want to use.

To perform basic lighting, you will need to:
* Enable lighting support in OpenGL: when lighting is enabled, OpenGL automatically generates colors for your models
* Enable a light source
* Provide information about the surface you are lighting:
a. Surface normals
b. Material properties

1. Enable depth support
First, set the depth clear value: glClearDepth(1);
Second, enable depth testing: glEnable(GL_DEPTH_TEST);

2. Enable OpenGL lighting support
glEnable(GL_LIGHTING);

3. Enable a light, ranging from GL_LIGHT0 to GL_LIGHT7 (OpenGL guarantees at least eight lights)
glEnable(GL_LIGHT0); or glEnable(GL_LIGHT1); glEnable(GL_LIGHT2); etc.

The rest should be clear if you go through the code listed below.

#include <GL/glut.h>
GLfloat angle = 0.0;

void drawPlane()
{
// Draw a red x-axis, a green y-axis, and a blue z-axis. Each of the
// axes are ten units long.
glBegin(GL_LINES);
glColor3f(1, 0, 0); glVertex3f(-10, 0, 0); glVertex3f(10, 0, 0);
glColor3f(0, 1, 0); glVertex3f(0, -10, 0); glVertex3f(0, 10, 0);
glColor3f(0, 0, 1); glVertex3f(0, 0, -10); glVertex3f(0, 0, 10);
glEnd();
}

void drawCube (void)
{
//Color will not work if this is not enabled
glEnable(GL_COLOR_MATERIAL);
glRotatef(angle, 1.0, 0.0, 0.0);
glRotatef(angle, 0.0, 1.0, 0.0);
glRotatef(angle, 0.0, 0.0, 1.0);
glColor3f(1.0, 0.0, 0.0);

glutSolidCube(2);
}

void init (void)
{
glEnable(GL_DEPTH_TEST);
glEnable (GL_LIGHTING);
glEnable (GL_LIGHT0);
}

void display (void)
{
glClearColor (0.0,0.0,0.0,1.0);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt (0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
drawPlane();
drawCube();
glutSwapBuffers();
angle++;
}

void reshape (int w, int h)
{
glViewport (0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (60, (GLfloat)w / (GLfloat)h, 1.0, 100.0);
glMatrixMode (GL_MODELVIEW);
}

int main (int argc, char **argv) {
glutInit (&argc, argv);
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow ("OpenGL Lighting");
init ();
glutDisplayFunc (display);
glutIdleFunc (display);
glutReshapeFunc (reshape);
glutMainLoop ();
return 0;
}

OpenGL Translation/Rotation over C++ Example

#include <GL/glut.h>
float yRotationAngle = 0.0f;
float yLocation = 0.0f;

void display (void)
{
glClearColor(0.2f, 0.7f, 0.0f, 1.0f);

glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();

// Push everything 10 units back into the scene
glTranslatef(0.0f, 0.0f, -10.0f);

// Translate our object along the y axis
glTranslatef(0.0f, yLocation, 0.0f);


// Rotate our object around the y axis
glRotatef(yRotationAngle, 0.0f, 1.0f, 0.0f);

// Render the cube
glutWireCube(2.0f);

// Flush buffer to window
glFlush();


yRotationAngle += 0.01f;

// Wrap the angle back into range once it exceeds 360 degrees
if (yRotationAngle > 360.0f)
yRotationAngle -= 360.0f;
}

void reshape (int width, int height)
{
glViewport(0, 0, (GLsizei)width, (GLsizei)height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60, (GLfloat)width / (GLfloat)height, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
}


int main (int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_SINGLE);
glutInitWindowSize (500, 500);
glutInitWindowPosition (100, 100);
glutCreateWindow ("OpenGL Rotation and Translation");
glutDisplayFunc(display);
glutIdleFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
}

Tuesday, June 28, 2011

Model Transformation in OpenGL

If you are new to transformations and OpenGL concepts then I would like you to run through my previous article listed here.
The following are the OpenGL functions for Model transformations:
glRotate()
glScale()
glTranslate()

It will be helpful if you understand matrices/vectors and basic trigonometry, along with the four quadrants around a circle.
For our use here we will start from an identity matrix.

In mathematical terms, if we were to represent the following transformation:
glTranslatef( 4.0, 2.0, 0.0 );
glRotatef( 90.0, 0.0, 0.0, 1.0 );
glVertex3f( 1.0, 2.0, 3.0 );

it would be represented as (first-quadrant rule applied):

| 1 0 0 4.0 |   | cos90 -sin90 0 0 |   | 1.0 |     | x |
| 0 1 0 2.0 | x | sin90  cos90 0 0 | x | 2.0 |  =  | y |
| 0 0 1 0.0 |   |   0      0   1 0 |   | 3.0 |     | z |
| 0 0 0 1.0 |   |   0      0   0 1 |   | 1.0 |     | w |  (the transformed vertex)



The most important thing to remember about model transformation is that it affects the co-ordinate system. As a result, all models appear to be transformed because they are drawn relative to the new co-ordinates. The current co-ordinate system is represented by the current matrix.

The final position or layout of the object depends on the order in which the functions are called, and thereby on how the co-ordinate system is re-aligned.

Example 1: Let us see how a cube/rectangle will render if we follow the transformations in the order given below.

1. glPushMatrix() : this pushes a copy of the current co-ordinate-system matrix onto the stack; any further changes/transformations apply to the working copy while the original matrix is saved.




2. glTranslatef( 4.0, 2.0, 0.0 )
: This moves the co-ordinate system origin to the new location





3. glScalef( 1.0, 0.5, 1.0 ) : The y-axis of the co-ordinate system is scaled down by half




4. glRotatef( 45.0, 0.0, 0.0, 1.0 ) : Rotates the co-ordinate system 45 degrees around the z-axis



5. Any custom function to draw a rectangle will result in the model being drawn with respect to the new co-ordinate system.



6. glPopMatrix() : After drawing we want to reset our co-ordinate system to the original layout



If you visualize it (as shown above), any model drawn is transformed according to the changes in the co-ordinate system layout.
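The save/restore behaviour of steps 1 and 6 can be mimicked with an explicit stack. A toy C sketch that tracks only the co-ordinate-system origin, just to make the idea concrete (pushOrigin/popOrigin/translateOrigin are illustrative names, not OpenGL API):

```c
/* A toy matrix stack: each entry holds the current co-ordinate origin. */
typedef struct { float x, y; } Origin;

static Origin originStack[32];   /* originStack[0] starts at (0, 0) */
static int top = 0;              /* originStack[top] is the current origin */

/* Save a copy of the current origin, like glPushMatrix(). */
void pushOrigin(void) { originStack[top + 1] = originStack[top]; top++; }

/* Discard the working copy and restore the saved one, like glPopMatrix(). */
void popOrigin(void) { top--; }

/* Move the current origin, like glTranslatef(dx, dy, 0). */
void translateOrigin(float dx, float dy)
{
    originStack[top].x += dx;
    originStack[top].y += dy;
}
```

pushOrigin(); translateOrigin(4, 2); ...draw... popOrigin(); leaves the origin back at (0, 0), exactly as steps 1-6 above leave the real matrix stack.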

Monday, May 30, 2011

OpenGL-1

OpenGL is a huge area, and you can very easily give it up considering how many concepts there are to pick up. Beginners find it complex to understand the basics, which, once understood, make using this excellent API real fun and produce stunning graphic results. I feel it is always a good idea to keep theory to a minimum and quickly get visual results, so that beginners feel they are creating something rather than getting bogged down with theoretical concepts. I will provide the iPhone sources from the next post onwards, but first the dirty part, i.e. some theory :)

VIEWPORT
The viewport is the part of the window where you can draw or render output.
We can change the viewport using glViewport(GLint x, GLint y, GLsizei width, GLsizei height).
Here x, y are the co-ordinates of the viewport's lower-left corner, measured from the screen's lower left (0, 0).
The viewport works together with the world co-ordinates set by glOrtho, which are mapped into device co-ordinates.
The device co-ordinates are then mapped to the viewport as pixel co-ordinates by glViewport().
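Assuming the device co-ordinates are the usual normalized [-1, 1] range, the viewport step is plain arithmetic; a sketch (viewportMap is an illustrative name, not part of the GL API):

```c
/* Map a point from normalized device co-ordinates in [-1, 1] to window
   pixels, for a viewport set by glViewport(x, y, width, height). */
void viewportMap(int x, int y, int width, int height,
                 float ndcX, float ndcY, float *winX, float *winY)
{
    *winX = (ndcX + 1.0f) * 0.5f * (float)width  + (float)x;
    *winY = (ndcY + 1.0f) * 0.5f * (float)height + (float)y;
}
```

With a 500x500 viewport at (0, 0), NDC (0, 0) lands at pixel (250, 250) and NDC (-1, -1) at pixel (0, 0).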

glOrtho()
Since we brought up glOrtho(), let me dig into it a little before progressing further. To understand
this function, let us look at its syntax:
GLvoid glOrtho( GLdouble left, GLdouble right,
GLdouble bottom, GLdouble top,
GLdouble nearClip, GLdouble farClip );

If we think of the six parameters above as the bounds along each axis, joining them describes a box (which is exactly what glOrtho defines). It represents a box in virtual space like the one below, and this becomes our viewing volume.


Any object outside the viewing volume will be clipped and not visible.

The viewing volume is flattened to represent a 2D screen. To illustrate, let us create a viewing volume extending 2 units in every direction:
glOrtho( -2.0, 2.0, -2.0, 2.0, -2.0, 2.0 );
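Mathematically, glOrtho maps this box onto the [-1, 1] cube; a C sketch of that mapping using the rows of the orthographic projection matrix (orthoMap is an illustrative name, not a GL call):

```c
/* Map an eye-space point through glOrtho(l, r, b, t, n, f) into
   normalized device co-ordinates in [-1, 1]. */
void orthoMap(float l, float r, float b, float t, float n, float f,
              const float p[3], float ndc[3])
{
    ndc[0] = 2.0f * (p[0] - l) / (r - l) - 1.0f;
    ndc[1] = 2.0f * (p[1] - b) / (t - b) - 1.0f;
    /* the camera looks down -z, so z = -n maps to -1 and z = -f to +1 */
    ndc[2] = -2.0f * p[2] / (f - n) - (f + n) / (f - n);
}
```

For the volume glOrtho(-2, 2, -2, 2, -2, 2), the point (1, -1, 0) maps to (0.5, -0.5, 0); anything mapping outside [-1, 1] is clipped, as noted above.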



Vertex
A vertex is a point in space represented by x, y, z, w co-ordinates. x, y, z represent a 3D point, with -z going into the screen and +z coming out toward the viewer.
w is the homogeneous co-ordinate and defaults to 1.0; the actual 3D position is x/w, y/w, z/w.

Syntax: GLvoid glVertex2f( GLfloat x, GLfloat y ) or
glVertex3f( x, y, z )


There are different functions for defining a vertex using double, float, and int parameters; only the suffix changes, e.g. glVertex2d() or glVertex2i().

Additionally, we can pack the x, y, z values into an array and pass that as a vector parameter, as in the example below:

static GLfloat leftVertex[] = { 1.0, 1.0, 1.0 };

and then pass it to glVertex3fv(leftVertex).



Geometric Primitives

All drawing done on screen using OpenGL is built from shape primitives such as points, lines, line strips, line loops, triangles, quads, etc. These are called the drawing modes and are stored as enum types in the OpenGL specification. The concept is simple: we declare the co-ordinates within the primitive drawing mode and OpenGL does the job of rendering the display. To reduce the theory and improve clarity, let us use an example to draw 2 points:

static float v[] = { 0.3, 0.7 };
glPointSize( 2.5 ); //specifies the point size or thickness
glBegin( GL_POINTS ); //What type of drawing to render using the vertices specified
glVertex2fv( v );
glVertex2f( 0.6, 0.2 );
glEnd(); //end of GL_POINTS declarations




Or
static float v[] = { 0.3, 0.7 };
glColor3f( 1.0, 0.0, 0.0 ); //enabling color
glLineWidth( 2.5 );
glBegin( GL_LINES );
glVertex2fv( v ); //defining a vertex using array syntax
glVertex2f( 0.6, 0.2 ); //defining a vertex using x,y syntax
glVertex2f( 0.3, 0.2 );
glVertex2f( 0.6, 0.7 );
glEnd();



Transformations
I will delve deeper into transformations once you are comfortable with the basic idea, but to start with, let us consider the two major players involved when we use a camera to photograph an object, and try to understand the same concept in 3D space while using OpenGL.

Assume you want to take a photo of a person (i.e. a model, in OpenGL terms). I have a camera and I start clicking. Next, if we want to take the photo at a different angle, what are the options available to us?

1. Keeping the Person/Model stationary and moving the camera
i. I can change my camera position, keeping the model stationary, and move left, right, up, or down to view the object differently even though the Person/Model has not moved. (Strictly speaking, camera placement is the viewing transformation.)

ii. I can zoom the camera lens in and out and change the perspective view of the Model.
This is PROJECTION TRANSFORMATION

2. Keeping the camera stationary and moving the Person/Model
i. Moving the Person's position with respect to the stationary camera will change the way the camera views the Person.

ii. Rotating/turning the Person about his/her current position will change the view.
iii. Moving the Person closer to or farther from the camera has the same effect as the camera zooming in or out.
This is MODEL TRANSFORMATION


This should give you a rough idea of how OpenGL works. I will add one more blog post on the concepts essential to get started, and then we will do all our work on the iPhone (source included) so that we do not get lost just learning the OpenGL API, which anyway is better explained on the OpenGL website.

Wednesday, September 15, 2010

Processing XML Web Services in iPhone

In this 2-part series I will show you how we can communicate with a .NET XML web service (SOAP and REST types) using Xcode. The detailed document is available for download HERE.

Today I will discuss how to communicate with a SOAP-based web service in your iPhone applications. Keep in mind that, unlike .NET or Java, where consuming a web service is just a matter of a few clicks and the IDEs create the proxy code for you, in Xcode we have to do the web service configuration manually, make the HTTP/HTTPS calls, and parse the resulting XML ourselves.

I will not be teaching you how to create .NET web services here, but in my example I have a web service method named getPointsHistoryByCustomerID which takes a customer id as input, queries the database layer, and returns the associated customer profile as XML.

Refer to the PDF document for details of the whole process, along with configuring/coding the NSURL and NSURLConnection objects and using the delegates that are essential for request/response processing:

- (void)connection:(NSURLConnection *)connection didReceiveResponse:
(NSURLResponse *)response

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data

-(void) connectionDidFinishLoading:(NSURLConnection *) connection

-(void) connection:(NSURLConnection *) connection didFailWithError:(NSError *) error

Monday, September 6, 2010

Core Data -1

When it comes to data management, we would generally use a file or a SQLite DB to maintain persistent data. We would manually write the code to save and read data between memory and the persistent store.

Core Data provides us a pre-packaged framework to create models/entities and an API to persist them on our devices.

When we create an iPhone app and check the Core Data option, the following 3 references are created in the Application Delegate:

- NSManagedObjectModel *managedObjectModel;
- NSManagedObjectContext *managedObjectContext;
- NSPersistentStoreCoordinator *persistentStoreCoordinator;


There are 3 layers in the Core Data architecture:
1. NSPersistentStoreCoordinator
    a. Responsible for the direct link with the underlying physical store, located via an NSURL:
                    NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                     stringByAppendingPathComponent:@"db.sqlite"]];
                    
2. NSManagedObjectModel
        It contains the metadata for the model, which in turn contains the entities.
        An entity can be visualized as a class that replicates the database
        object in memory.
   
3. NSManagedObjectContext
        The Managed Object Context (MOC) is used as a scratch pad.
        Objects are pulled through the stack into the
        MOC and then kept there while we change them. All inserts, deletes, and
        updates to the set of objects in the MOC are held until we tell the MOC
        to save. At that point the MOC's list of changes is pushed down through
        the stack, at each step translated closer to the eventual language of the
        POS (Persistent Object Store, e.g. a file or SQLite), where it eventually
        becomes native (i.e. SQL statements for the SQLite POS) and is sent to
        the persistent storage.
       
       
A.    Steps involved in creating a PersistentStoreCoordinator

    -(NSPersistentStoreCoordinator *) persistentStoreCoordinator
    {
        //Return the coordinator if it has already been created
        if(persistentStoreCoordinator) return persistentStoreCoordinator;

        NSURL *storeUrl =[NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                stringByAppendingPathComponent:@"db.sqlite"]];

        NSError *error=nil;
        persistentStoreCoordinator =[[NSPersistentStoreCoordinator alloc]
                        initWithManagedObjectModel:[self managedObjectModel]];

        if(![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                    configuration:nil
                    URL:storeUrl
                    options:nil
                    error:&error])
        {
            //handle errors
        }

        return persistentStoreCoordinator;
    }

    - (NSString *)applicationDocumentsDirectory {
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask,YES);
        NSString *basePath = nil;
        if([paths count] > 0)
        {
            basePath = [paths objectAtIndex:0];
        }
        return basePath;
    }

B.     Steps involved in creating the ManagedObjectModel which will store all our entities
    -(NSManagedObjectModel *) managedObjectModel
    {
        if(managedObjectModel!=nil)
            return managedObjectModel;
       
        //This will create one MOM for all model files in our application
        managedObjectModel=[[NSManagedObjectModel mergedModelFromBundles:nil] retain];
       
        return managedObjectModel;
    }
   




C.    Steps involved in creating the ManagedObjectContext
    i.  Get the persistent store coordinator
    ii. Configure the ManagedObjectContext
   
    - (NSManagedObjectContext *) managedObjectContext
    {
        if (managedObjectContext != nil)    
            return managedObjectContext;
       
        NSPersistentStoreCoordinator *coordinator=[self persistentStoreCoordinator];
        if(coordinator != nil)
        {   
            managedObjectContext=[[NSManagedObjectContext alloc] init];
            [managedObjectContext setPersistentStoreCoordinator:coordinator];
        }
        return     managedObjectContext;
    }
  

Wednesday, March 24, 2010

Location Tracking in iPhone

The iPhone provides you the ability to get your location, relying on cell-tower triangulation as well as a built-in GPS.
In this tutorial I will demonstrate how to find your location using a small iPhone app.
You can integrate MapKit's MKMapView to make the output more graphical.
I will try to provide a small demo on that if I get time later.

Add the Core Location framework to your project to use the location data.
Add a few UILabel controls for displaying the longitude and latitude. For updating the labels we will need the associated
IBOutlets to these labels. If you are not familiar with IBActions/IBOutlets then refer to the Apple documentation.
The sample code is as below:


#import <UIKit/UIKit.h>

@interface MyLocationViewController : UIViewController {
  IBOutlet UILabel *latitude;
  IBOutlet UILabel *longitude;
}

@end


We will use the CLLocationManager class, which sends updates to a delegate when the location changes.
The delegate protocol for this class is CLLocationManagerDelegate, so our view controller should
implement this protocol.

#import <UIKit/UIKit.h>
#import <CoreLocation/CoreLocation.h>

@interface MyLocationViewController : UIViewController <CLLocationManagerDelegate>
  {
  CLLocationManager *locationManager;
  IBOutlet UILabel *latitude;
  IBOutlet UILabel *longitude;
}

@end

In the viewDidLoad of the view controller, configure the CLLocationManager settings such as distanceFilter and desiredAccuracy, and then call the startUpdatingLocation method of the CLLocationManager to get the delegate working.

- (void)viewDidLoad {
  [super viewDidLoad];
  locationManager = [[CLLocationManager alloc] init];
  locationManager.delegate = self;
  locationManager.distanceFilter = kCLDistanceFilterNone;
  locationManager.desiredAccuracy = kCLLocationAccuracyHundredMeters;
  [locationManager startUpdatingLocation];
}

From the delegate protocol, implement the following method in your view controller:

- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation


The full body of the implementation will be

- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation
{
  int degrees = newLocation.coordinate.latitude;
  double decimal = fabs(newLocation.coordinate.latitude - degrees);
  int minutes = decimal * 60;
  double seconds = decimal * 3600 - minutes * 60;
  NSString *lat = [NSString stringWithFormat:@"%d° %d' %1.4f\"",
                   degrees, minutes, seconds];
  latitude.text = lat;
  degrees = newLocation.coordinate.longitude;
  decimal = fabs(newLocation.coordinate.longitude - degrees);
  minutes = decimal * 60;
  seconds = decimal * 3600 - minutes * 60;
  NSString *longt = [NSString stringWithFormat:@"%d° %d' %1.4f\"",
                     degrees, minutes, seconds];
  longitude.text = longt;
}
Here we take the location, convert the decimal co-ordinate into degrees, minutes, and seconds, and display the value in the correct label.
For the mathematical calculation, remember: 1 degree = 60 minutes = 3600 seconds.
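The same conversion can be factored into a small helper, mirroring the arithmetic in the delegate above; a sketch in plain C (toDMS is an illustrative name):

```c
/* Split a decimal co-ordinate into degrees, minutes, and seconds.
   1 degree = 60 minutes = 3600 seconds. */
void toDMS(double coord, int *degrees, int *minutes, double *seconds)
{
    *degrees = (int)coord;                 /* truncates toward zero */
    double decimal = coord - *degrees;
    if (decimal < 0.0)
        decimal = -decimal;                /* same effect as fabs() */
    *minutes = (int)(decimal * 60.0);
    *seconds = decimal * 3600.0 - *minutes * 60.0;
}
```

12.5825 degrees comes out as 12° 34' 57", for example.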

Getting the hang of the delegate pattern is core to this kind of iPhone programming, so master that first and then get into some serious iPhone development.