The first version of RV64X is targeted at supporting an OpenGL ES 2.0 feature set. This means that the fixed-function transform, lighting, and pixel processing of OpenGL ES 1.x is replaced by programmable shaders for the geometry, lighting, and fragment operations, with some severe limitations compared to a desktop OpenGL implementation. Good references for these features are:
1. https://en.wikibooks.org/wiki/GLSL_Programming
2. https://www.opengl.org/pipeline/article/vol004_2
3. http://glslstudio.com/primer/
4. https://developer.amd.com/wordpress/...penGLES2.0.pdf
5. http://people.csail.mit.edu/kapu/sig...SGcourseKP.pdf
A good description of the geometry pipeline is here: https://en.wikibooks.org/wiki/GLSL_P...ransformations
and good descriptions of rasterization (using barycentric coordinates) are here:
- https://en.wikibooks.org/wiki/GLSL_P.../Rasterization
- https://www.scratchapixel.com/lesson...rization-stage
- https://www.doc.ic.ac.uk/~dfg/graphi...csSlides08.pdf
- https://www.digipen.edu/sites/defaul...rasterizer.pdf
- https://fgiesen.wordpress.com/2013/0...n-in-practice/
- https://fileadmin.cs.lth.se/cs/Educa...terization.pdf
The following article by Daniel Koch and Nicolas Capens describes ANGLE, an emulation layer that implements OpenGL ES 2.0 on top of Direct3D:
https://www.seas.upenn.edu/~pcozzi/O...ghts-ANGLE.pdf
GLSL functionality is described here: https://nehe.gamedev.net/article/gls...duction/25007/
An excellent reference for the OpenGL ES 2.0 state machine is here: http://openglinsights.com/pipeline.html
------------------------------------------------------------------------
In RV64X, the GLSL virtual machine is replaced by the RISC-V Vector ISA, which is far more flexible and expansive than GLSL, making much more functionality possible.
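As a rough illustration, here is a minimal sketch of a fragment-shader-style loop (scaling one color channel) written against the ratified RVV 1.0 C intrinsics; the function name and flat float buffer are assumptions made for the example, not anything specified by RV64X:

```c
#include <stddef.h>
#include <riscv_vector.h>

/* Illustrative sketch only: a fragment-shader-style loop that scales one
 * color channel, expressed directly in RVV 1.0 C intrinsics rather than
 * GLSL. The function name and buffer layout are assumptions. */
void scale_channel(float *pixels, size_t n, float gain)
{
    for (size_t i = 0; i < n; ) {
        size_t vl = __riscv_vsetvl_e32m1(n - i);   /* strip-mine the loop */
        vfloat32m1_t v = __riscv_vle32_v_f32m1(pixels + i, vl);
        v = __riscv_vfmul_vf_f32m1(v, gain, vl);   /* per-lane multiply   */
        __riscv_vse32_v_f32m1(pixels + i, v, vl);
        i += vl;
    }
}
```

Because the shader "ISA" here is ordinary RISC-V vector code, none of GLSL's restrictions on control flow or program size apply.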
For the Rasterizer Unit in RV64X, we consider the rasterization of points, lines, and triangles, so instructions for the following could be included:
1. Accessing a linear frame buffer with a specified format for rendering a point with specific properties: color, size, multisample mode, texture, etc.
2. Rasterizing a line/vector/edge using a modified Bresenham algorithm
3. Rasterizing a triangle using barycentric interpolation
So we will need instructions for the following (a rough C sketch of these calculations appears after the list):
1. Linear frame-buffer access (Addr = Base + XSIZE*(y-1) + x, taking the row index y as 1-based)
2. Bresenham parameter calculation (dx=x2-x1, dy=y2-y1, D=2*dy-dx)
3. Barycentric parameter calculation, i.e. the edge function (ya-yb)*x + (xb-xa)*y + xa*yb - xb*ya
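Here is a minimal C sketch of the three calculations above, written as plain software models of what the proposed instructions would compute; the function names, the struct, and the 32-bit pixel type are assumptions for illustration, not part of any RV64X specification:

```c
#include <stddef.h>
#include <stdint.h>

/* 1. Linear frame-buffer addressing: Addr = Base + XSIZE*(y-1) + x,
 *    taking the row index y as 1-based (per the formula above) and
 *    assuming 32-bit pixels for illustration. */
static inline uint32_t *fb_addr(uint32_t *base, size_t xsize, size_t x, size_t y)
{
    return base + xsize * (y - 1) + x;
}

/* 2. Bresenham setup for a line from (x1,y1) to (x2,y2):
 *    dx = x2-x1, dy = y2-y1, D = 2*dy-dx (first-octant case). */
typedef struct { int dx, dy, d; } bres_params;

static inline bres_params bresenham_setup(int x1, int y1, int x2, int y2)
{
    bres_params p = { x2 - x1, y2 - y1, 0 };
    p.d = 2 * p.dy - p.dx;
    return p;
}

/* 3. Edge-function evaluation for the triangle edge a->b:
 *    E(x,y) = (ya-yb)*x + (xb-xa)*y + xa*yb - xb*ya.
 *    A pixel is inside the triangle when all three edge functions agree
 *    in sign; normalized by twice the triangle area, they become the
 *    barycentric weights used for attribute interpolation. */
static inline int64_t edge_function(int64_t xa, int64_t ya,
                                    int64_t xb, int64_t yb,
                                    int64_t x,  int64_t y)
{
    return (ya - yb) * x + (xb - xa) * y + xa * yb - xb * ya;
}
```

In hardware, each of these would presumably collapse into one instruction or a short fixed sequence, with the frame-buffer format and multisample state coming from control/state registers rather than function arguments.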
There are inherent advantages to NOT limiting shader programs to a standard GLSL virtual machine. In OpenGL ES 2.0, branches are only permitted in the forward direction, there are limits on shader program size, and so on. By using a RISC-V Vector core as the shader "CPU", MIMD (as opposed to SIMT) shader architectures become possible. This opens up all kinds of possibilities for graphics, physics, AI, and other applications.
For example, it would be easy to implement ray tracing on this architecture: one shader core could perform the intersection calculations while another computes lighting and yet another handles material interactions and physics, all in a MIMD processor arrangement. Furthermore, a unified memory architecture can permit all cores to share the same intermediate data, perhaps with a transactional memory model.
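As a toy-level illustration of that arrangement, here is a C sketch using POSIX threads as stand-ins for independent shader cores and a shared array as the unified memory. Every name and stage function is an invented stub, and the barrier hand-off is deliberately simplistic (a real design would double-buffer ray batches so all cores compute concurrently):

```c
#define _POSIX_C_SOURCE 200112L
#include <pthread.h>

/* Three "cores", each running a different program on shared data:
 * MIMD, as opposed to SIMT where all lanes run a single program. */
#define NRAYS 1024

typedef struct { float t, shade, material; } ray_state;

static ray_state rays[NRAYS];     /* stand-in for unified memory      */
static pthread_barrier_t sync3;   /* all three cores rendezvous here  */

static void *intersect_core(void *arg)      /* core 0: intersections */
{
    (void)arg;
    for (int i = 0; i < NRAYS; i++) rays[i].t = 1.0f;        /* stub */
    pthread_barrier_wait(&sync3);           /* publish intersections  */
    pthread_barrier_wait(&sync3);           /* sit out lighting pass  */
    return NULL;
}

static void *lighting_core(void *arg)       /* core 1: lighting */
{
    (void)arg;
    pthread_barrier_wait(&sync3);           /* wait for intersections */
    for (int i = 0; i < NRAYS; i++) rays[i].shade = 0.5f * rays[i].t;
    pthread_barrier_wait(&sync3);           /* publish lighting       */
    return NULL;
}

static void *material_core(void *arg)       /* core 2: materials/physics */
{
    (void)arg;
    pthread_barrier_wait(&sync3);           /* sit out intersections  */
    pthread_barrier_wait(&sync3);           /* wait for lighting      */
    for (int i = 0; i < NRAYS; i++) rays[i].material = rays[i].shade + 1.0f;
    return NULL;
}

int main(void)
{
    pthread_t cores[3];
    pthread_barrier_init(&sync3, NULL, 3);
    pthread_create(&cores[0], NULL, intersect_core, NULL);
    pthread_create(&cores[1], NULL, lighting_core, NULL);
    pthread_create(&cores[2], NULL, material_core, NULL);
    for (int i = 0; i < 3; i++) pthread_join(cores[i], NULL);
    pthread_barrier_destroy(&sync3);
    return 0;
}
```

The shared rays[] array plays the role of the unified memory; a transactional memory model, as suggested above, would let the cores coordinate access to that intermediate data without explicit locking.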
Some of the possibilities such an architecture could enable are described in my blog here: https://nextgengpu.tech.blog/