Sunday, August 12, 2012

iOS framebuffer

All over the web there are questions about how to get hold of the frame buffer on an iOS device, and the answer is always the same: "There is no frame buffer". Usually the question reflects a wish to do fast blitting of bitmaps, generated on the fly, to the screen. That variant of the question is usually answered with "Quartz is too slow for that", followed by some muttering about OpenGL, textures and quads while people wander off. I have not seen any working code come out of these answers.

While it is true that there is no direct access to the frame buffer on an iOS device, you can get pretty darn close without resorting to OpenGL, and with fairly good results (60 fps can be sustained, provided your generator code is fast enough). Here is how:

QuartzCore has the ability to generate a bitmap object backed by user-accessible memory, so for off-screen purposes there is a way to create a "frame buffer" in the form of a bitmap object. The creation function is documented here, and the following is an example of creating one from a CGRect with the desired width and height:

```objc
CGColorSpaceRef csp = CGColorSpaceCreateDeviceRGB();
CGContextRef _context = CGBitmapContextCreate(NULL,
                                              (size_t)frame.size.width,
                                              (size_t)frame.size.height,
                                              8,
                                              4*(size_t)frame.size.width,
                                              csp,
                                              kCGImageAlphaNoneSkipFirst);
CGColorSpaceRelease(csp);
```

Passing NULL as the first argument means that Quartz will allocate the memory. This frees you from memory management and also allows Quartz to optimize the location. The last argument means that the bitmap is stored ARGB in memory (byte 0 is alpha, byte 1 is red, byte 2 is green, byte 3 is blue, and so on) and that the alpha component is ignored. This is done to improve performance, but it means that the image will always be opaque. Using the following code

```objc
uint32_t *framebuffer = CGBitmapContextGetData(_context);
```

you get access to the pixels of the bitmap, 4 bytes (one 32-bit word) per pixel. The frame buffer is an array of width*height pixels, stored row after row (i.e. the index of the pixel at location (row,column) is row*width+column). The array starts in the upper-left corner.

The bitmap context is the first element of the frame buffer emulation. It can be converted to an image with:

```objc
CGImageRef img = CGBitmapContextCreateImage(_context);
```

The documentation will tell you to draw this image with CGContextDrawImage() in a UIView's drawRect: method. This is where Quartz gets its reputation for being slow, because that is not a very fast operation.

The key to a fast frame buffer-like object on iOS is to use a CALayer. This is the basic building block of Quartz on iOS, and it is essentially a cached CGImageRef that the graphics subsystem can render (you can probably already see where this is heading). The documentation is not very clear on this, but it is possible (and very easy) to replace a CALayer's backing store through its contents property. To assign a new CGImage to a CALayer you simply do this:

```objc
CGImageRef img = CGBitmapContextCreateImage(_context);
layer.contents = (__bridge id)img;
CGImageRelease(img);
```

The first line obtains a CGImage from the bitmap context as described above. The second line assigns it to the layer's contents property. This property is of type id, but any CFType (which CGImage is) can be cast to id; the __bridge cast indicates that memory management is handled by Objective-C from this point on, so the image must be released again (last line).

That is pretty much it. The key to keeping the speed up is not to allocate a new bitmap over and over, but only to obtain a new image every time the bitmap has been updated, changing the backing store in between. CGBitmapContextCreateImage() is a low-overhead operation, whereas CGBitmapContextCreate() is not. The contents property can be updated from anywhere in your program, and the layer will update on-screen accordingly the next time the screen is rendered.

The last part is to add the CALayer that backs your bitmap to some view:

```objc
[view.layer addSublayer:layer];
```

All this can be wrapped up in a small CALayer subclass.

```objc
#import <QuartzCore/QuartzCore.h>
#import <stdint.h>

@interface RSFrameBufferLayer : CALayer

// Class method to create a new layer with an underlying
// bitmap. Both will have the size set by the frame
+ (RSFrameBufferLayer *)layerWithFrame:(CGRect)frame;

// Same as above
- (id)initWithFrame:(CGRect)frame;

// Draw bitmap to screen
- (void)blit;

// Get the underlying context to use for higher-level
// drawing operations in Quartz
@property(readonly) CGContextRef context;

// Get the raw "frame buffer"
@property(readonly) uint32_t *framebuffer;

@end
```

The implementation is very simple (the __bridge cast means this is compiled with ARC, so there is no autorelease and no [super dealloc]):

```objc
#import "RSFrameBufferLayer.h"

@implementation RSFrameBufferLayer

@synthesize context = _context;

+ (RSFrameBufferLayer *)layerWithFrame:(CGRect)frame {
    return [[RSFrameBufferLayer alloc] initWithFrame:frame];
}

- (id)initWithFrame:(CGRect)frame {
    if (self = [super init]) {
        self.opaque = YES;
        self.frame = frame;
    }
    return self;
}

- (void)dealloc {
    CGContextRelease(_context);
}

- (void)blit {
    CGImageRef img = CGBitmapContextCreateImage(_context);
    self.contents = (__bridge id)img;
    CGImageRelease(img);
}

- (void)setFrame:(CGRect)frame {
    CGRect oldframe = self.frame;
    [super setFrame:frame];
    if (frame.size.width != oldframe.size.width ||
        frame.size.height != oldframe.size.height) {
        if (_context) {
            CGContextRelease(_context);
        }
        CGColorSpaceRef csp = CGColorSpaceCreateDeviceRGB();
        _context = CGBitmapContextCreate(NULL,
                                         (size_t)frame.size.width,
                                         (size_t)frame.size.height,
                                         8,
                                         4*(size_t)frame.size.width,
                                         csp,
                                         kCGImageAlphaNoneSkipFirst);
        CGColorSpaceRelease(csp);
    }
}

- (uint32_t *)framebuffer {
    return CGBitmapContextGetData(_context);
}

@end
```
