[TTTA] Episode 1 Hello Triangle

In our inaugural episode, we’ll explore how to render a single triangle on your screen. By examining a frame captured from a simple triangle-rendering application, we’ll learn what an application needs to provide to the operating system to render this blue triangle.

Begin by downloading “S1-HelloTriangle.zip” from the shared drive and extract its contents. Open the frame using the “Frame Analyzer” to proceed.

You’ll notice the blue triangle in the center—the outcome of our rendering. On the left, the API log lists just one item, signifying that a single “DrawInstanced” API call was responsible for rendering our triangle on the screen.

Selecting this draw call opens a new “Resource” window, showcasing three categories: Input (In), Execution (Exe), and Output (Out). These resources are crucial for understanding the data required for rendering.

We will walk through these resources one by one to find out what has been used in this rendering.

Starting with the Input section, we find two items. ‘B:’ denotes a buffer, followed by a unique hash. The term ‘VBV’ (Vertex Buffer View) indicates that this buffer stores the vertices.

This particular buffer contains three vertices, each comprising two components: Position and Color.

Position: three float values per vertex, each within the range [-1.0, 1.0], designating the x, y, and z coordinates. For our 2D example, all z-values are zero, with the x/y pairs forming a triangle onscreen.

Color: four float values per vertex, each within [0.0, 1.0], representing the ‘RGBA’ color; in our case, full blue with full opacity.
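A minimal C++ sketch of this vertex layout (the struct, field names, and example coordinates are illustrative, not taken from the capture):

```cpp
#include <array>
#include <cstddef>

// Illustrative layout matching the description above: 3 floats of
// position followed by 4 floats of RGBA color per vertex.
struct Vertex {
    float position[3]; // x, y, z in [-1.0, 1.0]; z is 0 for this 2D triangle
    float color[4];    // RGBA in [0.0, 1.0]; here full blue, fully opaque
};

// Three vertices forming a triangle on screen (coordinates are made up).
const std::array<Vertex, 3> kTriangle = {{
    {{ 0.0f,  0.5f, 0.0f}, {0.0f, 0.0f, 1.0f, 1.0f}},
    {{ 0.5f, -0.5f, 0.0f}, {0.0f, 0.0f, 1.0f, 1.0f}},
    {{-0.5f, -0.5f, 0.0f}, {0.0f, 0.0f, 1.0f, 1.0f}},
}};
```

Each vertex therefore occupies 7 floats (28 bytes), which is exactly what the vertex buffer view would describe to the pipeline.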

Why are colors defined as floating-point values? When designing the interface, ideally an application shouldn’t need to know which hardware it will run on. Thus, when rendered on an 8-bit system, the resulting color will be in the range [0, 255]; on a 10-bit system (also known as HDR10), the final range will be [0, 1023].
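That hardware-dependent mapping can be sketched as a small quantization helper (assuming round-to-nearest, UNORM-style conversion; the function name is hypothetical):

```cpp
#include <cmath>
#include <cstdint>

// Map a normalized float channel in [0.0, 1.0] to an integer channel:
// 8 bits gives [0, 255], 10 bits (HDR10) gives [0, 1023].
uint32_t QuantizeChannel(float value, int bits) {
    const uint32_t maxValue = (1u << bits) - 1; // 255 or 1023
    return static_cast<uint32_t>(std::lround(value * maxValue));
}
```

So the same float 1.0 blue channel becomes 255 on an 8-bit display and 1023 on a 10-bit one, without the application ever knowing which it is.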

Why are colors and transparency defined on the vertices instead of on the surface/triangle? In a modern rendering system, a surface is always represented by 3 vertices, and in general the number of surfaces is larger than the number of vertices. Defining color and transparency on vertices therefore reduces the amount of data stored: the color and transparency values of a vertex are reused by every surface that shares that vertex.
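A quick back-of-the-envelope illustration of that saving (the quad example is hypothetical, not from this capture):

```cpp
#include <cstddef>

// A quad drawn as two triangles touches 6 vertex slots, but only 4 of
// those vertices are unique. Storing the 7-float attributes (3 position
// + 4 color) once per shared vertex instead of once per triangle corner
// saves data, and the gap widens as meshes grow.
constexpr size_t kFloatsPerVertex = 7;
constexpr size_t kDuplicated = 6 * kFloatsPerVertex; // two independent triangles
constexpr size_t kShared     = 4 * kFloatsPerVertex; // four shared vertices
constexpr size_t kSaved      = kDuplicated - kShared;
```

Even for this tiny quad the shared layout saves 14 floats; for a character mesh with tens of thousands of shared vertices the saving is substantial.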

Geometry Input: A visual result of the input vertices, drawing a triangle in a 3D space.

That’s the end of Input section.

In the Execution section, to keep this episode simple, we focus only on Arguments and Shader (SH):

ID3D12GraphicsCommandList::DrawInstanced: the draw command used in this rendering. Its definition can be found in Microsoft’s documentation for ID3D12GraphicsCommandList::DrawInstanced.

VertexCountPerInstance: Number of vertices to draw for each instance. Here, 3 vertices form our triangle.

InstanceCount: How many instances are drawn in this call. Here we have only 1 instance: our single triangle.

StartVertexLocation: Index of the first vertex to read. Sometimes the vertex buffer also holds data for other draw calls; this offset locates the correct vertices.

StartInstanceLocation: A value added to each index before reading per-instance data from a vertex buffer.

From the DrawInstanced command’s arguments, we ascertain the intention to draw a single instance represented by three vertices.
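As a rough CPU-side sketch of what those arguments mean (an illustration only, not the real D3D12 implementation), the draw reads vertex-buffer slots like this:

```cpp
#include <vector>

// Illustrative only: enumerate which vertex-buffer slots a non-indexed,
// DrawInstanced-style call would fetch, given its arguments.
std::vector<unsigned> VerticesFetched(unsigned vertexCountPerInstance,
                                      unsigned instanceCount,
                                      unsigned startVertexLocation) {
    std::vector<unsigned> fetched;
    for (unsigned instance = 0; instance < instanceCount; ++instance) {
        for (unsigned v = 0; v < vertexCountPerInstance; ++v) {
            // Each instance reads the same vertex range, offset by
            // StartVertexLocation, from the bound vertex buffer view.
            fetched.push_back(startVertexLocation + v);
        }
    }
    return fetched;
}
```

With our arguments (3 vertices, 1 instance, offset 0), the call reads vertices 0, 1, and 2 exactly once; StartInstanceLocation only affects per-instance data lookups, not which vertices are fetched here.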

Shader: the shader code of this rendering system, a programmable input that defines how the result is rendered.

SH: 2555136722384: the shader resource, identified by its hash.

HLSL(High-Level Shader Language): A shader language developed by Microsoft

VSMain: The main function of Vertex Shader

PSMain: The main function of the Pixel Shader. Its return value becomes the final value of each pixel.

The full design and execution of the shader system is complicated and will be extended gradually through this series as the demo grows more complex. For the full DX12 pipeline, see Microsoft’s “Pipelines and Shaders with DX12” documentation.
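To make the division of labor between the two stages concrete, here is a toy CPU-side analogue (the real VSMain/PSMain are HLSL running on the GPU; these C++ stand-ins only mirror the data flow, and all names besides VSMain/PSMain are made up):

```cpp
#include <array>

// What the vertex shader hands to the rest of the pipeline.
struct VSOut {
    std::array<float, 3> position;
    std::array<float, 4> color;
};

// "Vertex shader": for this demo, pass position and color through unchanged.
VSOut VSMain(std::array<float, 3> pos, std::array<float, 4> col) {
    return {pos, col};
}

// "Pixel shader": the (interpolated) vertex color becomes the pixel's
// final value, which is why every pixel of our triangle comes out blue.
std::array<float, 4> PSMain(const VSOut& in) {
    return in.color;
}
```

Feeding a blue vertex through both stages yields a blue pixel, which is all this frame’s shaders do.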

And that wraps up the Execution section! We’ll touch upon the Output results when relevant in future discussions.

To recap, we’ve covered:

  • Input: Vertex positions and color data.
  • Shader: Source code specifying VSMain and PSMain functions.

With these inputs prepared, we’re set to execute the “DrawInstanced” command:

DrawInstanced(3, 1, 0, 0)

Executing this function draws the blue triangle on the screen, achieving our rendering goal.

Thank you for joining Episode 1! Your thoughts and questions are welcome in the comments below. Stay tuned for the next installment!

Triangle-To-Triple-A Series – Intro

Step into the realm of game graphics with our comprehensive blog series, designed to empower you with a fundamental understanding of the modern graphics pipeline and the essence of graphics API calls. As we traverse the theoretical concepts, our narrative will be lightly sprinkled with practical examples, aiming to show how applications converse with the operating system to paint pixels on the screen—no heavy coding, just the pure science of graphics.

Within this series, we’ll occasionally reference Intel’s Graphics Performance Analyzers (GPA), a tool that exemplifies these concepts in action, though our focus will remain firmly on the underlying graphics principles rather than the intricacies of the tool itself. By the end of our journey, you’ll not only grasp the foundational elements of game graphics but also appreciate how they coalesce to form the breathtaking visuals in contemporary gaming.

Join us as we peel away the layers of complexity in modern graphics, clarifying the dialogue between application and machine, and illuminating the path from the genesis of a graphic call to the lush, immersive environments that define today’s gaming experiences.


All the frame/stream captures used in these blogs can be found in my shared drive:

https://drive.google.com/drive/u/0/folders/1c8vXyConNvgtM43VnboaAlXCCb8He0ZO


Hope you can enjoy!

*I may use ChatGPT to assist with the blog writing, but I review everything before publishing :)

Personal Hardware Configuration/Collection

CPU: ADL-S 12900K

Board: Z690E

SSD1: Samsung 980Pro 2TB -> Linux system drive

SSD2: Western Digital WDS100 1TB -> Windows system drive

SSD3: Samsung 870 EVO 2TB -> Shared drive, NTFS (supported in Linux since kernel 5.15)

Cooler: Cooler Master MasterLiquid 360 + LGA1700 toolkit

Power Supply: RM850X

GPU: Intel Arc A770(16GB)

RAM: Kingston Fury Beast DDR5 16GB x 2
———————————————————————————————–

Mouse: Logitech Super Light Pro
Keyboard: K100 RGB AIR Wireless

Camera: Logitech Brio 4K Pro

Speaker: Logitech Z207 (Connected to my display)

Main Display: Dell U3223QE

(Backup) Display: LC27RG50, refresh rate 60 to 240 Hz, 1080p

Microphone: Snowball ICE

———————————————————————————————

Camera Set: Sony Alpha 7 IV

Camera Lens: Sigma 24-70mm f/2.8 for Sony

Camera Lens: Sony 70-200mm f/4 OSS II

Graphics Software Engineer work categories

After some months at Intel, I have collected the different categories of work and positions here, and gotten to know the different areas people focus on. As an end-to-end validation engineer, I am happy to have the chance to work with different teams.

By “Graphics Software Engineer” I mean not just pure “Computer Graphics Application” work, but more broadly “GPU Software Engineer”. Shader processing, rendering, and computation show up not only in graphics-related features but nearly everywhere, so I am putting all the relevant parts here as a brief introduction to what we mean when we speak about a graphics driver.

First of all, as software engineers, GPU drivers are the most important thing we take care of. Below are the different component teams working on different parts of a GPU driver.

  1. Kernel Mode Driver: Focuses on the OS kernel, enabling the new hardware for general operation, including memory management, power management, workload scheduling, etc.
  2. User Mode Driver – Graphics: For the graphics area, there are 4 major graphics API drivers: [Vulkan, OpenGL, DirectX12, DirectX(*before 12)]. Vulkan and OpenGL are generic across both Windows and Linux. DirectX is a graphics API provided and maintained by Microsoft for Windows, and the DirectX series is the most commonly used graphics API in the most popular games. DirectX12 is separated out mainly because the change is huge: DirectX12 is to DirectX11 roughly what Vulkan is to OpenGL.
  3. User Mode Driver – Media: While playing videos or using video-editing software, this driver handles the encoding and decoding process. Note that the media driver sometimes uses graphics APIs as well.
  4. User Mode Driver – Display: Rendering results need to be shown on the connected display. This part focuses on display protocols and data synchronization.
  5. Compiler: Shader code needs to be compiled before it can be recognized and executed. OpenGL and other shader languages are designed to be compiled while the application runs, because the application never knows which hardware it will be installed on, but there are also situations where shader code needs to be compiled ahead of time. Notably, for DirectX, shader code is compiled twice: once ahead of time to DXBC bytecode, and once by the driver to the specific hardware’s instructions.
  6. User Mode Driver – OpenCL: OpenCL is a common API across different platforms and the basic building block for GPGPU applications. Although it exists on both Linux and Windows, more and more modern solutions are appearing to replace it.
  7. User Mode Driver – Specialized high-performance computing component (HPC): GPU-vendor-specific components, such as CUDA for Nvidia and oneAPI for Intel.
  8. System Control: Some GPU devices provide interfaces to control the hardware, like overclocking. Intel Arc Control gives you the ability to adjust overclocking in real time.
  9. AI: In most cases not a separate component, but more likely part of the other user mode drivers, e.g. DirectML for AI on Windows, CUDA/oneAPI on Linux, etc.

TO BE CONTINUED… Pre- & post-silicon, hardware validation, software component validation, software tools, end-to-end validation.

OpenGL Learning resources, beginner to master

Here are the learning resources I have been using for OpenGL in work & self study.

Learn as a beginner

Video Tutorial: (For domain knowledge)
The Cherno OpenGL series: https://www.youtube.com/playlist?list=PLlrATfBNZ98foTJPJ_Ev03o2oq3-GGOS2

The best OpenGL Tutorial so far. Explanations are clear and accurate, babysat me through the headache times:)

There is no need to follow every coding part exactly as he does, since the OpenGL version may differ. (My work mainly focuses on OpenGL ES.) The main goal is to understand how the graphics pipeline works, understand the meanings of the main terms like VAO, VBO, and EBO, and get familiar with the common APIs for drawing shapes and textures.

Website: https://learnopengl.com/

The best learning website, with all the needed knowledge and code. I have been following this website for all my practice; each lesson includes detailed explanations and corresponding code for direct hands-on testing.

Learn from your favorite applications

Graphics technology is widely used in many kinds of applications, especially games. To understand how your favorite application renders beautiful scenes on the screen, here are 2 main tools (both open-source projects) you can refer to:

  1. Gapid

Github link: https://github.com/google/gapid

Document page: https://gapid.dev/about/

Powerful tool developed and maintained by Google.

  2. Renderdoc

Main page: https://renderdoc.org/

Github link: https://github.com/baldurk/renderdoc

Document page: https://renderdoc.org/docs/index.html

Note that for both tools, only supported APIs can be captured. They have great support for certain categories, like Android mobile games developed in Unreal or Unity, but not all applications are easy to capture.

After getting a trace, you can review the OpenGL function calls and see how the final result is drawn and rendered.

Where to find more sample projects?

If you google “OpenGL examples”, you will mostly find old, simple, duplicated examples; even the https://learnopengl.com/ website offers better ones. But if you want to draw something very fancy, with complicated effects, there is a special flavor of OpenGL ES called WebGL, which uses much the same APIs as regular OpenGL shaders, with slight differences.

When you open a WebGL application, it renders the shaders in your browser (or other platforms like VS Code). There is a website hosting many WebGL projects where you can easily see the code and the final result rendered in real time on the web page:

ShaderToy: https://www.shadertoy.com/

Example:

ShaderToy provides tons of beautiful projects that can run on your computer with a single click! It is also a very good tool for shader coders, who can see the rendered scenes in real time. You can also transform a project back into traditional OpenGL shaders fairly easily, by using the WebGL shader as the fragment shader and adding a pass-through vertex shader. (Some API changes are needed.)

These resources have helped me a lot in my daily work and study. It’s 2021, and thanks to the internet I don’t need to buy a textbook or pay for a course. Feel free to leave a comment if you find other resources that are really helpful for studying OpenGL or useful in your daily work. I will keep this page updated as I find new things that improve my skills.
