Damn, this is really good. I'd been wondering lately whether I was just burnt out on games again, since I couldn't get into Alan Wake 2 at all.

Then the Valhalla DLC for God of War Ragnarök came out.

At first I heard it was a roguelite and wasn't thrilled; grinding loops aren't really my thing.

After the first run: oh hey, there's actually a fair bit of story here. Fine, I'll keep going for the plot.

After the second run: whoa, this is getting interesting. Kratos is starting to find himself again; one more run and I should see the ending.

After the third run: damn, the full-power Tyr fight is brutally hard.

Being the potato-handed player I am, I gritted through a fifth run, beat Tyr, watched the story, and figured that was about it, time to drop it.

With a "one last run" mindset I bought all the keys, opened every chest, and maxed out cooldown and strength.

That fully unlocked final run was glorious: warping between realms nonstop, cycling through six abilities with cooldowns refreshing the instant I swapped, Tyr couldn't even move. I got properly hooked.

Ten hours later: 100% achievements. This year's God of War Ragnarök DLC is not only free, it's full of heart and doesn't demand a huge grind. Fantastic!

Back in June, when Apple unveiled the Apple Vision Pro, I was genuinely excited: Apple is finally making a serious push into XR (now hurry up with a folding iPhone, please).

I'd been itching to try what developing an app for it feels like. When Apple announced the partnership with Unity and the dev toolkit, I immediately filed a request for Unity's beta program, but two months passed with no approval. Last week Unity finally opened the toolkit to all paid-license users, so I spent two evenings giving it a go, and my feelings turned rather complicated.

Honestly, just setting up the development environment took me a full day. Free-license users shouldn't bother installing the packages: they're only available to Pro, Enterprise and other paid tiers. Even if you open the editor through some workaround, you still can't install the packages, although if you happen to have a same-named package cache locally, well, maybe give it a try…

1. The officially claimed stable version for visionOS is 2022.3.11f1 (the China build is the matching 2022.3.11f1c1). When downloading, also tick the visionOS Build Target.

After the download finished, building the sample project prompts that only the Apple silicon editor can build. Fair enough, but it's also where the first big pit opens: I spent 30 minutes and still couldn't get Unity Hub to download the silicon editor. Even if you find the silicon editor's download button on the official site, clicking "install with Unity Hub" only ever offers the Intel build.

In the end I had to manually download the silicon editor + visionOS build support + iOS build support.

As far as I can remember this problem has existed ever since Apple shipped the M1 chip, and three years later it's still not fixed. (Serves you right, Unity, with those recent 996 demands.)

2. Apple restricts shaders on visionOS: AR mode and window mode only support MaterialX (immersive mode allows custom shaders). Unity's adaptation mirrors this: only built-in URP shaders and shaders authored in Shader Graph get converted automatically to MaterialX; hand-written custom URP shaders aren't supported at all, not even in the immersive mode Apple advertises.

So I went straight to the store and downloaded the official URP showcase, a forest scene. After building, nothing rendered on the Vision Pro at all, yet there were no errors either. Maybe the scene is just too complex?

3. I decided to try another URP demo, the Unity-Chan model, as my digital-waifu test. After crudely swapping the shaders, the import to the Vision Pro succeeded and I could see her!!!! But the model was stuck in bind pose. A round of forum digging later: with an Animator, Unity's Volume Camera doesn't trigger animation updates, and you have to change the Animator's Culling Mode.

After switching it to Always Animate, she finally started moving.
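
For reference, the fix is a one-liner on the Animator (a minimal sketch; attach it to the model or set the mode wherever you initialize the Animator):

using UnityEngine;

public class ForceAlwaysAnimate : MonoBehaviour
{
    void Start()
    {
        // The Volume Camera never reports the renderer as visible,
        // so force the Animator to keep evaluating regardless of culling.
        GetComponent<Animator>().cullingMode = AnimatorCullingMode.AlwaysAnimate;
    }
}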

In AR mode we can freely drag her to wherever looks right.

We can also observe her from any angle, and you can even interact with her using the new Touch Input.
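
Here's a rough sketch of the dragging part, assuming the pinch gesture surfaces through the standard Input System EnhancedTouch API; the actual PolySpatial input path may differ, and all names here are illustrative:

using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;

public class DragWaifu : MonoBehaviour
{
    public Camera volumeCamera;   // camera used to convert the touch position into a world ray
    public float distance = 1.5f; // fixed distance in front of the camera while dragging

    void OnEnable()  => EnhancedTouchSupport.Enable();
    void OnDisable() => EnhancedTouchSupport.Disable();

    void Update()
    {
        if (Touch.activeTouches.Count == 0) return;

        // Follow the first active touch while the pinch is held.
        var touch = Touch.activeTouches[0];
        Ray ray = volumeCamera.ScreenPointToRay(touch.screenPosition);
        transform.position = ray.GetPoint(distance);
    }
}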

Of course, switching to windowed mode is also easy:

just untick PolySpatial and build. But then all the fun is gone, isn't it?

The toolkit is still in a very raw state, but it's already enough to spark plenty of imagination. The real device is pricey, though, so for now I can only play around in the simulator.

Meanwhile, I hope Unity puts in some proper overtime and ships custom URP shader support soon; otherwise the cost of migrating existing projects is just too high.

Alright, that's it. I'm off to spend time with my XR waifu.

Yet another once-a-year update.

Recently I happened to open up one of the instanced draw calls in a frame capture


and found four buffers inside. I'd never looked closely at what's in them, so curiosity got the better of me and I clicked through.

Globals holds the global parameters,

things like light position and color, camera info, shadow info, time, and so on.

UnityPerMaterial holds the custom material properties.

UnityDrawCallInfo carries some instancing setup info.

And the biggest buffer is called UnityInstancing_PerDraw0.

For the simplest possible shader, it contains

UnityObject2World * InstanceCount

UnityWorld2Object * InstanceCount

An idea popped out: for the simplest shader, UnityWorld2Object isn't used at all, so couldn't we

shrink the 12888 in the capture down to 6444, i.e. cut it in half?

So before touching the project, first write a demo to verify whether the optimization is feasible and worthwhile.

The idea:

1. Use a MaterialPropertyBlock to store all the M (object-to-world) matrices.

2. Remove every reference to the Unity built-in matrices from the shader, so the corresponding buffer members get stripped at shader compile time.

3. Measure the cost of uploading only the Object2World array versus uploading both the Object2World and World2Object arrays.

The C# side
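
(The original post showed the C# setup as a screenshot. Below is a minimal sketch of the kind of driver it describes, assuming Graphics.DrawMeshInstanced with 1023 instances and a MaterialPropertyBlock; names like TestProp/TestPropInv are illustrative, not the author's exact code.)

using UnityEngine;

public class InstancedMatrixDemo : MonoBehaviour
{
    public Mesh mesh;
    public Material material;          // a material on the stripped shader, with GPU Instancing enabled
    public bool uploadWorldToObject;   // toggled by the UI button in the demo

    const int InstanceCount = 1023;    // max instances per DrawMeshInstanced call
    Matrix4x4[] objectToWorld = new Matrix4x4[InstanceCount];
    Matrix4x4[] worldToObject = new Matrix4x4[InstanceCount];
    MaterialPropertyBlock mpb;

    void Start()
    {
        mpb = new MaterialPropertyBlock();
        for (int i = 0; i < InstanceCount; i++)
        {
            var m = Matrix4x4.TRS(Random.insideUnitSphere * 30f, Random.rotation, Vector3.one);
            objectToWorld[i] = m;
            worldToObject[i] = m.inverse;
        }
    }

    void Update()
    {
        mpb.Clear();
        // "TestProp" is the custom matrix array the stripped shader reads instead of the built-in one.
        mpb.SetMatrixArray("TestProp", objectToWorld);
        if (uploadWorldToObject)
            mpb.SetMatrixArray("TestPropInv", worldToObject);

        Graphics.DrawMeshInstanced(mesh, 0, material, objectToWorld, InstanceCount, mpb);
    }
}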

The shader side

Test scene: a single draw call with 1023 instances.

A button on the left toggles between the two buffer setups.

With the optimized path running, the UnityInstancing_PerDraw0 buffer disappears from RenderDoc's buffer slots, and the replacement TestProp shows up in the list instead.

Opening TestProp confirms it contains only the uploaded M matrices.

Demo ready; time to build to device and test performance.

The test model

Snapshot analysis with SDP (Snapdragon Profiler)

Before optimization:

After optimization:

The difference: size 16384 (unoptimized) vs size 8192 (optimized).

AvgByte down 5%

ReadTotal down 3%

GPU frequency down 5%

Frequency analysis

Toggling between the two paths produces a clean triangle-wave pattern in the curve, which shows the change is actually doing something.

After merging it into the real project, though, the GPU did improve but CPU-side memory climbed very quickly,

and it triggered the classic Unity error:

Property (TestMat) exceeds previous array size (821 vs 2). Cap to previous size. Restart Unity to recreate the arrays.

This classic error can be worked around simply by clearing the MaterialPropertyBlock, or by creating the array at the full 1023 size up front, but both eat a lot of memory and GC.
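
Either workaround is a one-liner; a sketch, using the same illustrative property name as above:

// Option 1: clear the block before refilling it each frame.
mpb.Clear();
mpb.SetMatrixArray("TestProp", objectToWorld);

// Option 2: set a full-sized array once so Unity never has to grow it past the first size it saw.
var fullSize = new Matrix4x4[1023];
mpb.SetMatrixArray("TestProp", fullSize);   // locks the array size at 1023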

If you don't set the MaterialPropertyBlock's matrix array every frame, the GC problem goes away, but memory still keeps growing.

Looking at the engine source explains why:

during instanced rendering, if a MaterialPropertyBlock is passed in, it triggers sourceData.CopyFrom(MaterialPropertyBlock),

so as long as a MaterialPropertyBlock is used at all, the extra memory is unavoidable.

I also found that Unity hard-codes the Object2World parameter name in the engine source.

So, is there a way to avoid the MaterialPropertyBlock's memory cost without renaming Object2World, while still not uploading the extra World2Object matrices?

After a few experiments the conclusion was: give up on MaterialPropertyBlock entirely.

Luckily the PerDraw buffer isn't defined in the engine source but in an external package.

So go straight in and edit

Packages/com.unity.render-pipelines.core/ShaderLibrary/UnityInstancing.hlsl

find the PerDraw buffer definition,

remove UNITY_DEFINE_INSTANCED_PROP(float4x4, unity_WorldToObjectArray), and fix up everything that references it.

The end result: no MaterialPropertyBlock at all; instead, a custom macro switches the PerDraw buffer layout inside UnityInstancing.hlsl, and the custom layout simply drops the parts that aren't used.

A mosquito's leg is tiny, but it's still meat.

Done!

Reposting an article.

Brandon Sheffield · 12 Sep 2023 · 6 min read

https://insertcredit.com/opinion/unity/

Unity was once heralded as the savior of the video game industry. It was relatively easy to use, and provided an engine/framework for multiple games rather than just one. Even as late as the early 2000s, some game companies, especially in Japan, weren’t even sharing similar engines within their own teams! Before the popularity of commercial game engines, each game was built bespoke, which had some advantages, but took a lot of time, and made each port a chore. One of the greatest things Unity provided was a relatively easy pipeline to console releases, making ports more about platform quirks than full recoding.

I remember countless interviews I did as a journalist in 2005 where the interviewee said “now that there’s engines like Unity, things are getting easier.” My game company Necrosoft has used Unity for every commercial project it has ever made.

But now I can say, unequivocally, if you’re starting a new game project, do not use Unity. If you started a project 4 months ago, it’s worth switching to something else. Unity is quite simply not a company to be trusted.

What has happened? Across the last few years, as John Riccitiello has taken over the company, the engine has made a steady decline into bizarre business models surrounding an engine with unmaintained features and erratic stability.

I’ll talk about eroding features first. Unity has internal champions for its features. Once those champions leave the company, that feature languishes and falls apart. Unity buys competing products, and then if the owner of that product leaves the company, it is no longer supported. Unity has a “stable” version of its product called the “LTS” version. More experimental features are pushed to a beta which developers can use if they’re curious. Currently we are on an LTS version that requires us to open a blank page before doing anything else, otherwise the engine simply crashes. This is because of an error Unity introduced recently which they have not fixed. It adds a couple minutes every time we open the project, and is anything but stable.

The latest and final straw is this announcement, which you may have seen some developer friends talking about.

I’ll break down some of the most important factors in this discussion:

  • Unity Personal, which is free, now cannot be used offline.
  • All tiers of Unity now require developers to pay a set fee of a few cents for every game that's installed.
  • Unity Plus (which is going away), Pro, and Enterprise levels all cost a subscription, and most professional developers have to use it.
  • Different tiers of downloads and income determine how much you pay.
  • Unity has never made money off subscriptions; it has always made money off its ads platform (which you see in f2p mobile games, etc.).
  • There's a discount if you use Unity Services.
  • This money is meant to help with runtime issues when they can't even get their base stable version to run without crashing.

So the problem becomes this: they are already charging a subscription, and now a per install cost on top of that. What’s the point of the subscription if we’re also paying another charge on top of that? Why wouldn’t it be free at that point?

Also, it’s on developers to sort through these two types of costs, meaning Unity has added a bunch of admin work for us, while making it extremely costly for games like Vampire Survivor to sell their game at the price they do. Vampire Survivor’s edge was their price, now doing something like that is completely unfeasible. Imagine releasing a game for 99 cents under the personal plan, where Steam takes 30% off the top for their platform fee, and then unity takes 20 cents per install, and now you’re making a maximum of 46 cents on the dollar. As a developer who starts a game under the personal plan, because you’re not sure how well it’ll do, you’re punished, astoundingly so, for being a breakout success. Not to mention that sales will now be more costly for developers since Unity is not asking for a percentage, but a set fee. If I reduce the price of my game, the price unity asks for doesn’t decrease.

This all comes out of the developer’s pocket, but publishers won’t want to be on the hook for it either – expect fewer games to be pitched with Unity going forward.

And NOW consider bundles. Each install or activation counts as something that must be paid. We did the itch.io bundle for ukraine. That was approximately a million games given out per developer. If all those were installed, developers would owe $200,000 just for having given their game to charity. It makes charity bundles completely unfeasible. We have 800,000 copies of our game Gunhouse that are owned but not yet installed. Will we be on the hook for thousands of dollars for a game we donated to charity?

[edit: they have since walked back the idea that charity bundles would be on the hook for this, but there’s no method of tracking which games came from charity bundles and which simply came from that platform through purchases, so this is bogus. I have it on good authority that they just today learned what charity bundles are and how they work.]

On top of this there is no minimum cooldown period after they change their fee, so they can raise it as they like, whenever they like. Effectively we’re all locked into an upward-only, per-install pricing model that can change whenever Unity decides they need extra revenue to make their stock look healthy. No developer would have decided to use Unity if this was the business model from the start.
It proves that they’re willing to completely change things up with no notice. January is barely 4 months away, and this decision will affect titles that have been in development for years, and haven’t factored this into their budgets. If we had the option, we’d change now, but we’re too far into development. Unreal Engine, to their credit, only holds you to the EULA you signed up for, not whatever they’ve decided most recently.

I want to double down on this point – we did not sign up for this. Frankly the Unity Plus tier, which we currently use, which is going away presently, wasn’t even around when we signed up. Oh, and did I mention we’re automatically being switched to the more expensive Pro from Plus if we don’t cancel our subscription? If the agreement changes underneath you as you’re making the game, you can’t budget for it, and trust is completely lost. We did not plan for this, and it screws us massively on Demonschool, which is tracking to be our most successful game. You might say poor you, but again, we did not sign up for this and have no option to say no, since we’re close to release and this change is 4 months out. You can’t simply remake an entire game in another engine when you’ve been working on it for 4+ years.

It is clear that Riccitiello never considered that people might make games for reasons other than money. At least the attitude is consistent.

There is one other critical element here – developers are being offered a reduced price if they use Unity Services. This is basically monopolistic and anti-competitive, and they will likely get sued over it. At the very least it certainly won’t encourage any technology companies to make services for Unity. What’s the point when Unity will undercut you and still force devs into licensing their own product? Expect far less offerings from third parties in the future.

Ultimately, it screws over indies and smaller devs the most. If you can afford to pay for higher tiers, you don’t pay as much of this nickle and dime fee, but indies can’t afford to on the front end, or often it doesn’t make sense in terms of the volume of games you’ll sell, but then you wind up paying more in the long term. It’ll squash innovation and art-oriented games that aren’t designed around profit, especially. It’s a rotten deal that only makes sense if you’re looking at numbers, and assume everyone will keep using your product. Well, I don’t think people will keep using their product unless they’re stuck. I know one such developer who is stuck, who’s estimating this new scheme will cost them $100,000/month on a free to play game, where their revenue isn’t guaranteed.


Unity is desperately digging its own grave in a search for gold. This is all incredibly short-sighted and adds onto a string of rash decisions and poorly thought through schemes from Unity across the last few years. It’s no wonder Riccitiello sold 2,000 shares last week.

(Okay his selling shares was very likely part of a normal planned sell-through package since executives can’t dump stock all at once, but I wanted to be mean and the timing is funny, so there it is.)

Cover image from Hogwarts Legacy.

Not long ago my Windows dev machine died, leaving me with only the company-issued MacBook M1 Pro.

As someone used to debugging with RenderDoc on Windows, I was suddenly lost.

First, the Unity Editor's built-in Frame Debugger is out: when capturing URP-related events it frequently fails to lock onto the intended tick, which shows up as the event hierarchy view flickering. Pausing the editor only improves it somewhat and never really fixes it.

Next, the RenderDoc homepage has no macOS build to download.

Compiling a Mac build from source flashed through my mind, but RenderDoc doesn't seem to support Metal capture, so that was out.

That leaves two options:

1. Write a native plugin and embed the MTLCaptureManager capture code.

2. Build the project from the Unity Editor to an Xcode project, and use Frame Capture while debugging in Xcode.

Both are pretty tedious, though. Is there a way to capture and debug directly from the editor?

After some digging: yes, there is, but it has traps!!!

The official docs:

Xcode frame debugger Unity integration

Summarized, the steps are:

1. Create an empty project.

2. Open the Info tab of the Scheme.

3. Set the executable to the Unity Editor.

4. Enable Frame Capture as described in the docs.

5. Set the launch arguments; Unity Hub users need a special setup.

Let's follow along.

1. Open Xcode and create an Xcode project.

Choose a macOS Command Line Tool,

give the product any name, pick C++ as the language,

then choose where to save the project and open it.

2. Nothing special here.

The traps start at step 3.

If your Mac has an M1 chip, do NOT download the Apple silicon editor; you must use the Intel build.

M1 plus the silicon editor does give better performance and battery life, but frame capture simply won't trigger with it!!!!!

Step 4 has a trap too:

for the GPU Frame Capture option, don't pick Automatically; pick Metal directly.

The trap in step 5 probably only bites users of the China version.

China-version users have to launch the editor through Unity Hub, and the docs say to set the Executable to UnityHub.app.

The wrong setup:

In actual testing, UnityHub.app plus the projectPath launch argument never brings the project up. You should point it directly at UnityEditor.app instead.

The correct setup:

Finally, set the project launch path.
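
Concretely, the launch argument is just Unity's standard command-line flag; the path below is a placeholder for your own project:

-projectPath "/Users/yourname/Projects/MyUnityProject"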

Hit run in Xcode, and Xcode will launch Unity automatically.

At the same time, the long-awaited "camera" icon appears in the Game view.

Click it bravely!

After clicking, the Unity Editor freezes; switch back to Xcode and you'll see Frame Capture has already grabbed a frame.

Next, let's try Xcode's live shader editing:

1. Find the draw call that renders the Cube.

2. Find the Fragment Function that draw call uses and double-click to open it.

Add one test line to the shader so the output color keeps only the r channel,

then click the refresh button the arrow points at, and the Cube in the top-right immediately turns red!

Debugging done; click the "continue" button so the Unity Editor unfreezes.

And that's it.

Xcode's analysis tools are far more powerful than RenderDoc's, covering per-draw power, bandwidth, timing, instruction counts and more. Very comprehensive.

Preface
I've accidentally become a once-a-year blogger for real. Mostly because I'm genuinely busy, and since I've been working on Unreal Engine at Tencent, many engineering solutions aren't easy to share. Unlike effect-oriented write-ups where showing how to use a tool is enough, engineering posts need source code and assets presented together; with Unreal, asset export is tied to the engine version, and changes scattered across a huge codebase are hard to extract into anything like a Unity package, so I haven't had time to put that material in order.
So it's simply more efficient to port some of the generally applicable techniques over to Unity and write them up there.


Top half at 1.0× rate, bottom half reduced; a freshly made demo.

The video shows Unity 2021.3, using a native rendering plugin, rendering with Apple's Variable Rasterization Rates technique from a RenderFeature under the URP pipeline.

Apple introduces the technique like this:
"In complex 3D applications, every pixel performs a large amount of computation to produce a high-quality result. But as your rendered image grows with screen resolution, the cost of rendering that many high-quality pixels grows with it."
"A common solution is to lower the resolution locally: downsampling the parts of the image that are hard to notice, or where it's acceptable, saves a great deal of work."
Picture a racing game: the player's car keeps rendering at 1.0×, while the motion-blurred surroundings are rendered at a "directly" reduced resolution.
Why stress "directly"? Because the traditional motion-blur approach first grabs the screen (a blit), separates the player's car from the background with stencil, depth or some other mask, blurs the grabbed image and composites it back over the original. That triggers a render-pass switch along with a full-screen Load/Store, which costs a lot of bandwidth on mobile, and the end result is a hot device. Variable rate rendering/rasterization instead drops the resolution right as a draw call rasterizes into a given tile, so no extra post-processing is needed.
While writing this post I found that a creator on Bilibili has also made an introduction video on this technique; worth watching alongside.

This technique isn't an Apple exclusive.
DX12 on PC and Vulkan on PC/mobile both have their own implementations, even beefed-up ones; there it's called Variable Rate Shading (VRS).
The idea is that where pixels used to be shaded one-to-one, neighbouring pixels can now be grouped and shaded just once; the supported groupings are 1×1, 2×1, 1×2, 2×2, 4×2, 2×4 and 4×4.
Vulkan supports variable rate rendering at per-image, per-primitive and per-draw-call granularity.
As the names suggest: the whole image (post-processing style), per primitive (triangle), or per draw call (a single draw).
The per-primitive flavour is particularly well suited to open-world vegetation: for large swathes of foliage we can not only switch LOD levels but also switch the shading rate by distance within the same instanced draw call, so players are much less likely to notice the jarring three-stage LOD pops.
Another consequence of this technique is that it completely does away with the "off-screen soft particles" trick that mobile games have long relied on, which I first saw in GPU Gems 3.
That trick is still widely used in mobile titles from all the big studios.
https://developer.nvidia.com/gpugems/gpugems3/part-iv-image-effects/chapter-23-high-speed-screen-particles
It renders the expensive transparent work, VFX and particles, into a render target at 1/4 or 1/8 of native resolution to cut pixel work, then composites the result back onto the surface/framebuffer. The price is a depth/stencil texture at the same reduced resolution, a render-pass switch and other bandwidth-heavy operations, much like the post-process blur overhead mentioned above. With VRR/VRS, none of that is needed anymore.
The mainstream Qualcomm and MediaTek chip vendors now integrate this directly in their drivers too; once it spreads, expect more games that look great while staying cool and battery-friendly 😀
On Qualcomm the supported GPUs are Adreno 660 and later, and MediaTek's latest Dimensity chips support it as well. My Android phone isn't that modern, though, so this demo was made on an iPhone; iPhone has supported per-image/per-region VRR since iOS 13.
Enough background; on to the
How it's built
This assumes you are familiar with:
C++ programming
the Metal Shading Language
the Metal rendering API
Objective-C syntax
Unity URP / RenderFeature development
the Unity native plugin workflow
Rendering flow outline
1. In Unity, create a RenderFeature and register it on any event (e.g. AfterOpaque) to kick off the rate-map render pass (a C# skeleton of this is sketched right after this list).
2. In the RenderFeature, use a CommandBuffer to call into the native plugin, create a native RenderPass, and hand over Unity's ColorTarget and DepthTarget.
3. In the RenderFeature, record whatever rendering you want with the CommandBuffer.
4. In the RenderFeature, use the CommandBuffer to call the native plugin to end the native RenderPass and submit it to the GPU.
5. In the RenderFeature, use the CommandBuffer to call the native plugin to run a blit pass that upscales the downsampled ColorTarget and returns the result to Unity.
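A rough C# skeleton of steps 1–4, as a sketch only: the native entry points and event IDs referenced in the comments (GetRenderEventAndDataFunc, kBeginVRRPass, passDataPtr, etc.) are placeholders for the plugin interface shown further below.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class VRRRenderFeature : ScriptableRendererFeature
{
    class VRRPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get("VRR Native Pass");

            // Step 2: ask the native plugin to begin a Metal render pass that uses the rate map.
            // cmd.IssuePluginEventAndData(GetRenderEventAndDataFunc(), kBeginVRRPass, passDataPtr);

            // Step 3: record any Unity-side rendering here.

            // Step 4: end the native pass so it can be encoded and submitted.
            // cmd.IssuePluginEvent(GetRenderEventFunc(), kEndVRRPass);

            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    VRRPass pass;

    public override void Create()
    {
        pass = new VRRPass { renderPassEvent = RenderPassEvent.AfterRenderingOpaques };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(pass);
    }
}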
Relevant technical points
You can refer to the official NativeRenderingPlugin demo.
1. Create the Objective-C files directly under Assets/Plugins/iOS.
2. Implement the native callback functions that CommandBuffer supports.
There are two kinds: EventWithData (carries a data pointer) and Event (only sends an int event ID).
The corresponding CommandBuffer-side APIs are:
#if (UNITY_IOS && !UNITY_EDITOR)
[DllImport("__Internal")]
#endif
private static extern IntPtr GetRenderEventFunc();

#if (UNITY_IOS && !UNITY_EDITOR)
[DllImport("__Internal")]
#endif
private static extern IntPtr GetRenderEventAndDataFunc();

// Called on a CommandBuffer instance; the data argument is an IntPtr to pinned/unmanaged memory.
CommandBuffer.IssuePluginEventAndData(GetRenderEventAndDataFunc(), (int)EventID, dataPtr);
CommandBuffer.IssuePluginEvent(GetRenderEventFunc(), (int)EventID);
You cannot call a native function declared as follows from a CommandBuffer; that form is for direct calls from code and isn't recognized by the CommandBuffer:
extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API SomeFunction
3. Passing data between the CommandBuffer and the native plugin.
Unity resources such as index buffers, vertex buffers and RenderTextures are high-level abstractions rather than the actual GPU resources, but Unity gives us a way to fetch the underlying RHI resources:
RenderTexture.colorBuffer.GetNativeRenderBufferPtr()
RenderTexture.depthBuffer.GetNativeRenderBufferPtr()
Texture2D.GetNativeTexturePtr()
Mesh.GetNativeIndexBufferPtr();
Mesh.GetNativeVertexBufferPtr(int SubMeshIndex);
Each of these comes back as a pointer; when we pass it into the native plugin, we can cast it to the resource type of the graphics backend the device is currently using:
id<MTLBuffer> vertexBuffer = (__bridge id<MTLBuffer>)(vertexHandle);
id<MTLBuffer> indexBuffer = (__bridge id<MTLBuffer>)(indexHandle);
id<MTLTexture> srvTexture = (__bridge id<MTLTexture>)texturehandle;
id<MTLBuffer> uvBuffer = (__bridge id<MTLBuffer>)uvHandle;
Oh, and before passing the data across, be sure to use GCHandle.Alloc so the memory backing it can't be collected on the Unity side.
So the flow gains one small extra step:
1. IntPtr textureHandle = GCHandle.Alloc(RenderTexture.colorBuffer.GetNativeRenderBufferPtr(), GCHandleType.Pinned).AddrOfPinnedObject();
2. id<MTLTexture> srvTexture = (__bridge id<MTLTexture>)textureHandle;
Because IssuePluginEventAndData can only pass one void* data per call,
if you have a lot of data to send you'd need many calls; in practice you can pack everything into a struct and pass it in one go.
[StructLayout(LayoutKind.Sequential)]
struct VRRPassData
{
    public IntPtr colorTex;
    public IntPtr outputTex;
    public int validatedata;
}

IntPtr passDataPtr = GCHandle.Alloc(new VRRPassData(), GCHandleType.Pinned).AddrOfPinnedObject();
Remember to add the LayoutKind.Sequential attribute; it marks the struct as one contiguous block of memory.
Then on the native plugin side declare a struct with the same layout and cast to it:
typedef struct
{
    void* srcTexture;
    void* dstTexture;
    int validatedata;
} BlitPassData;

static void UNITY_INTERFACE_API OnRenderEventAndData(int eventID, void* data)
{
    g_BlitData = (BlitPassData*)data;
}
That's roughly everything to watch out for on the native rendering side.
Now for the Metal APIs used for VRR.
1. MTLRenderPassDescriptor describes a pass: the color target, the depth target, and whether VRR is used (by assigning its rasterizationRateMap property).
2. MTLRasterizationRateMapDescriptor describes a rasterizationRateMap.
2.1 MTLRasterizationRateLayerDescriptor describes a layer in the rate map: how the screen is divided into zones and each zone's rendering-resolution multiplier.
3. MTLRenderCommandEncoder: Metal accumulates a frame's rendering commands into the encoder, then endEncoding encodes them into something the GPU can understand.
4. MTLCommandBuffer is used to create the MTLRenderCommandEncoder; at the end of the frame, call [MTLCommandBuffer commit] to submit to the GPU.
Creating the rate map is simple; the official sample:
MTLRasterizationRateMapDescriptor *descriptor = [[MTLRasterizationRateMapDescriptor alloc] init];
descriptor.label = @"My rate map";
descriptor.screenSize = destinationMetalLayer.drawableSize;
MTLSize zoneCounts = MTLSizeMake(8, 4, 1);
MTLRasterizationRateLayerDescriptor *layerDescriptor = [[MTLRasterizationRateLayerDescriptor alloc] initWithSampleCount:zoneCounts];
8, 4, 1 means the screen is divided into 8 zones horizontally and 4 vertically. The 1 is a placeholder: the rate map only needs two values, but MTLSizeMake takes three arguments, so just pass 1 for the third; it isn't actually used.
for (int row = 0; row < zoneCounts.height; row++) {
    layerDescriptor.verticalSampleStorage[row] = 1.0;
}
for (int column = 0; column < zoneCounts.width; column++) {
    layerDescriptor.horizontalSampleStorage[column] = 1.0;
}
1.0 means render at native resolution × 1.0.
layerDescriptor.horizontalSampleStorage[0] = 0.5;
layerDescriptor.horizontalSampleStorage[7] = 0.5;
layerDescriptor.verticalSampleStorage[0] = 0.5;
layerDescriptor.verticalSampleStorage[3] = 0.5;
The outermost rows and columns are set to 0.5×.
[descriptor setLayer:layerDescriptor atIndex:0];
id<MTLRasterizationRateMap> rateMap = [_device newRasterizationRateMapWithDescriptor:descriptor];
Finally, the descriptor is used to create the actual rate map.
firstPassDescriptor.rasterizationRateMap = _rateMap;
Assign the rate map to the render pass descriptor.
id<MTLRenderCommandEncoder> commandEncoder = [commandBuffer renderCommandEncoderWithDescriptor:firstPassDescriptor];
The render pass descriptor then creates the RenderCommandEncoder that receives this frame's rendering commands.
After that, just let Unity's RenderFeature render as normal.
Once rendering is done, run a blit pass that upscales and outputs a color texture matching the device resolution.
Preparation (native plugin side)
MTLSizeAndAlign rateMapParamSize = _rateMap.parameterBufferSizeAndAlign;
_rateMapData = [_device newBufferWithLength:rateMapParamSize.size options:MTLResourceStorageModeShared];
[_rateMap copyParameterDataToBuffer:_rateMapData offset:0];
1. The blit pass shader needs to know which parts of the screen were given which rate, so we create a buffer to store that data.
[renderEncoder setFragmentBuffer:_rateMapData offset:0 atIndex:0];
We push the retrieved data into the Metal fragment shader's buffer slot at index 0.
typedef struct
{
    float4 position [[position]];
} PassThroughVertexOutput;

fragment float4 transformMappedToScreenFragments(PassThroughVertexOutput in [[stage_in]],
                                                 constant rasterization_rate_map_data &data [[buffer(0)]],
                                                 texture2d<half> intermediateColorMap [[texture(0)]])
{
    constexpr sampler s(coord::pixel, address::clamp_to_edge, filter::linear);
    rasterization_rate_map_decoder map(data);
    float2 physCoords = map.map_screen_to_physical_coordinates(in.position.xy);
    return float4(intermediateColorMap.sample(s, physCoords));
}
Prepare a fragment shader in which the index of constant rasterization_rate_map_data &data [[buffer(0)]] must match the index used in [renderEncoder setFragmentBuffer:_rateMapData offset:0 atIndex:0];.
In constexpr sampler s(coord::pixel, address::clamp_to_edge, filter::linear); we use coord::pixel rather than coord::normalized, meaning we sample with real texture-pixel coordinates instead of normalized [0-1] UVs.
After that, the native plugin just performs the equivalent of Unity's Blit(sourceTex, targetTex, material).
That part involves the MTLRenderPipelineDescriptor machinery. Back in Unity you then get the correct result.
One final point: a GPU frame capture shows that the white rendered region actually occupies only a small part of the original render target (the black area). We can query the real size of the white region with [rateMap physicalSizeForLayer:atIndex:] and create the render target at that scaled-down size in the first place, cutting out the wasted memory.
Since this post isn't a Metal API tutorial and assumes you already know the relevant APIs, I won't expand on those. It's just an exploration of the feasibility of extending Unity with variable rate rendering, plus a bit of tech evangelism.
The project files will be released on GitHub later; I used far too many crude words as variable names while writing the code, so it's still being cleaned up.
 

It's been ages since I updated the blog; almost two years since the last technical post. I've accumulated a lot of interesting things I'd love to share over these two years, but since I joined Tencent and have been working on AAA-related projects, the technical side is hard to open up about, and of course the bigger reason is Tencent's quantum-state compliance tripwire, lol. I'll be getting back to the blog soon with some other kinds of technical sharing, but before that it'll read more like a diary.

I really didn't expect the final chapter of Trails through Daybreak to drag this badly; what a mess. I watched the demo of the newly released Trails through Daybreak II, and it looks even worse, so I've decided not to buy it.

Uncharted: Legacy of Thieves Collection is out on PC, so I might as well start that. I already played it on PS4, but on PC you get the quality mode plus high FPS, DLSS and so on, so it's worth another run.

Recently a teammate on the project wanted to paint grass across an open-world map; the original plan was to paint it directly with the Terrain brush.
But with the built-in Terrain system, either the editor grinds to a halt while you paint, or the grass disappears once the camera pulls back.

And in practice, when the grass finally goes into the game, the programmers still need to extract the grass data and build different rendering paths for different hardware tiers anyway.

So I figured I'd just write a high-performance brush tool for them.

To be continued…


Most of the screen space reflection implementations you'll find online are built on a deferred pipeline or as a post-processing pass. The main reasons: it's a screen-space effect; the reflection direction needs the reflector's surface normal and the incoming ray direction; and the finishing touches like blurring are all readily at hand in a deferred or post-processing setup.

But going deferred or post-process can also make things more complicated: handling self-reflection, filtering which freely placed objects should reflect, adapting performance for mobile, and so on.

This post implements a quick screen space reflection on Unity 2019's URP Forward pipeline.

First, a quick scene: one Plane as the ground, two Cubes and one complex model (a Pikachu), all rendered in the Opaque queue.
Then add a custom skybox, used as the fallback whenever a reflection ray doesn't hit anything.


Next, place another Plane of the same size exactly on top of the ground, nudged up slightly along the Y axis to avoid Z-fighting, but rendered in the Transparent queue; this is the reflection plane. We don't reuse the ground Plane itself because we're about to rely on URP's CameraOpaqueTexture, a screen grab taken after the opaque queue finishes rendering, together with CameraDepthAttachment, to build the reflection. A Transparent-queue Plane isn't included in either of those, and it can still read the ground Plane's depth beneath it.

Now we can start writing the reflection shader on this Transparent Plane.

The rough flow of screen space reflection:

1. From the camera direction and the reflector's position, compute the view-to-surface vector, then reflect it about the surface normal to get the reflection vector.
2. Starting from the point where the view ray hits the surface (the pixel the reflection will fill), march along the reflection vector; at each step, project the sample point back into screen space and test it against the screen-space depth.
3. When the screen-space depth and the current sample point intersect, or come close enough, treat it as a hit: take the hit point's screen-space coordinates (UV), sample CameraOpaqueTexture there and paste that color back at the starting point. Otherwise keep marching; to stop unbounded marching from tanking performance, set a maximum march distance and return the skybox once it's exceeded.