(Deep Sea Game) Memory Problem with Mobile Implementation by Legolas Wang

You have to think about memory if you want to make a mobile game. At the very beginning, I did not realize the game would hit the memory limit that quickly, yet the memory problem appeared at the earliest stage of Deep Sea.

The original plan was to make Deep Sea a mobile game, with each room built as an individual scene, linked by a gate that loads the connected room. However, because we use the highest-quality 3D assets, we hit this limitation sooner than expected. The first few builds on the iPhone X were successful, using two particle systems and around 10 items. At runtime, however, memory usage already reached around 1.2GB, which, surprisingly, is even larger than the assets themselves.

Running on an actual device made me realize that the Unity profiler cannot reflect the real usage on a mobile device at all. The real usage is almost twice as big as what the profiler shows. As a result, the game is very limited in the number of assets it can afford.

Then I did some research, trying to figure out what really happened. Since I got a memoryNotEnough warning from Xcode when running later builds, I started digging into it. It turns out that iOS is actually quite smart: it warns you twice with this warning before it kills the game. On the third warning, the game is killed immediately.

That is exactly why I saw three consecutive warnings and then the game was gone. Based on my research, the maximum memory allowance for iOS devices is around 1GB, with the modern iPhone 7, 8, and X capped at 1392MB. That is not a lot, and it is reasonable, since mobile devices need to take energy into account. The lesson I learned is that if you want to design for mobile, you have to be realistic about app size as well as memory usage.
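One reason runtime memory can exceed the on-disk asset size is that textures are typically stored compressed on disk but expanded to raw pixels (plus mipmaps) in RAM. Here is a rough back-of-the-envelope sketch of that effect; the texture sizes are hypothetical examples, not Deep Sea's actual assets:

```python
# Rough estimate of the in-memory footprint of an uncompressed texture,
# illustrating why runtime memory can exceed on-disk asset size.
# (Texture sizes here are hypothetical, not Deep Sea's actual assets.)

def texture_memory_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    """Raw RGBA32 footprint in MB; a full mip chain adds roughly one third."""
    size = width * height * bytes_per_pixel
    if mipmaps:
        size = size * 4 // 3  # the mip chain sums to about 1.33x the base level
    return size / (1024 * 1024)

# A single 4096x4096 RGBA32 texture with mipmaps:
print(round(texture_memory_mb(4096, 4096), 1))  # ~85.3 MB in RAM
```

A handful of textures like this, uncompressed at runtime, already eats a large slice of a ~1.4GB budget, which is why high-quality assets hit the ceiling so fast.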

A good developer needs to balance the number of loading screens against the visual details. You may count on the advancement of mobile devices, but that future is not here yet, and common consumers' devices need to be considered and taken care of. So think hard about memory usage and do a lot of prototyping before you start: you'd better know the limits early, before too much time is wasted.

(Deep Sea Game) Real AI in Game using IBM Watson by Legolas Wang

Yep, there is an AI in our game, and it's just like the AI in most other games. It performs like a computer: providing quests and feedback, keeping the player good company. However, none of that makes it intelligent from my perspective. It's more like a pre-scripted response machine.

In my understanding, when you call something AI, it should actually be smart enough to at least understand some of what you are saying, like a smart chatbot or Siri that understands the context of the conversation. Based on those thoughts, I started searching for some real AI that could potentially be implemented into the game.

The solution I found is IBM Watson, which, to my surprise, also provides services for game companies. More surprisingly, it even has decent integration with the Unity engine. It turns out that the Watson APIs provide quite a few linguistic functions, like understanding a sentence and making predictions, or reading a sentence out loud like a human.

In my attempt, the idea was to use the Watson text-to-speech API to read the player's name out in conversations, instead of ignoring the player's name entirely. However, due to the current state of the technology, and perhaps my limited understanding of the API, the result was quite disappointing: the audio feels very robotic, in a way that just sounds strange, and it cannot match our voice actor's recordings.
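To sketch the idea, here is roughly what that name readout looks like using Watson's Text to Speech service via its Python SDK (`pip install ibm-watson`), rather than the Unity integration we used in the game. The credentials, service URL, voice choice, and the greeting line itself are placeholder assumptions, not Deep Sea's actual script:

```python
# Sketch: splice the player's name into a line and have IBM Watson Text to
# Speech read it aloud. Credentials and the greeting text are placeholders.

def line_with_player_name(player_name):
    """Build a spoken line that includes the player's name."""
    return f"Welcome back to the deep sea, {player_name}."

def synthesize_greeting(player_name, api_key, service_url):
    # Imported here so the pure text helper above works without the SDK.
    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator(api_key))
    tts.set_service_url(service_url)
    response = tts.synthesize(
        line_with_player_name(player_name),
        voice="en-US_AllisonV3Voice",  # one of Watson's stock voices
        accept="audio/wav",
    ).get_result()
    return response.content  # raw WAV bytes

print(line_with_player_name("Alice"))

# Usage (needs real IBM Cloud credentials; makes a network call):
# wav = synthesize_greeting("Alice", "YOUR_API_KEY", "YOUR_SERVICE_URL")
# open("greeting.wav", "wb").write(wav)
```

The seam between the synthesized name and the pre-recorded lines around it is exactly where the robotic mismatch I describe shows up.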

It turns out that a human recording, at the time I'm writing this article, is still better than even the best AI out there. But from this experiment, I do see the potential. I deeply believe that some day in the near future, game AI will no longer be just a response machine, but an actual intelligence that can understand the player's intent and respond accordingly.

(Deep Sea Game) Visual Scripting by Legolas Wang

I think, in a way, people underestimate the power of visual scripting. I certainly did not. In this article, I would like to share my opinion on visual scripting and how it is used in the Deep Sea game.

From time to time, I read comments from people talking down visual scripting in the Unity environment because it seems like cheating. However, I think those people overlook the time savings and the ease of debugging that come from doing things visually. Traditionally, if you want to hard-code some mechanic, the steps are to manually create the scripts, link up the components they need across all of them, and only then write the actual mechanic.

With the help of visual scripting, you can put your attention on the logic you actually want, and save hours of coding repetitive content. In the graph below, I'm using a visual scripting tool to handle the door-locking mechanic in the game, which jumps between different states. The visual approach lets me tweak the logic at any time without the hassle of modifying code in multiple places. This, in a way, gives me more confidence to try different setups and find the most appropriate one.

That is not to say visual scripting is without problems. As people also point out, visual scripting is limited in what it can express, because only certain things are available in it. That is true, in a way, but here lies the actual power of visual scripting: you can use it when it is appropriate and, when it is not, just switch back to writing code.
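For comparison, the kind of logic the door graph expresses can be hard-coded as a small state machine. The sketch below is a hypothetical equivalent in Python (the actual graph lives in a Unity visual scripting tool, and these states and events are illustrative, not Deep Sea's real setup):

```python
# A hard-coded sketch of a door-locking state machine like the one the
# visual graph handles. States and events are hypothetical examples.

from enum import Enum, auto

class DoorState(Enum):
    LOCKED = auto()
    CLOSED = auto()
    OPEN = auto()

# (state, event) -> next state; any unlisted event leaves the state unchanged
TRANSITIONS = {
    (DoorState.LOCKED, "use_key"): DoorState.CLOSED,
    (DoorState.CLOSED, "open"): DoorState.OPEN,
    (DoorState.OPEN, "close"): DoorState.CLOSED,
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)

door = DoorState.LOCKED
for event in ["open", "use_key", "open"]:  # trying the door, then the key
    door = step(door, event)
print(door)  # DoorState.OPEN
```

In code, changing the flow means editing the transition table and re-testing; in the graph, the same change is dragging one wire, which is the time saving I am talking about.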

This approach allows me to build and play around with mechanics very quickly and efficiently, and in the meantime to write code only when I absolutely need to, for unique effects. Personally, this will continue to be my No.1 approach to doing things within Unity. Yet the exploration of better ways to do things will never stop.

(Deep Sea Game) Balancing the Pacing of the Game by Legolas Wang

Balancing the pacing of a game is hard and tricky. At the beginning of learning game design, I always thought that a good game is designed by very intelligent people, and that it will just work well once published, if it is designed carefully enough.

Where it should be hard and where it should be easy: those, I thought, could all be designed up front, and the result would just be perfectly balanced. That turns out to be false. What I learned through development is that good game mechanics, pacing, and balance require serious playtesting. You can never predict everything just by designing and playing it yourself. At a certain stage, it is mandatory to bring in some outside help.

Those playtests are like putting a magnifier on your game: every annoying design decision, imbalance, and point of confusion has nowhere to hide. It is best if the game can be tested over and over again just to get things to the right spot, where it is neither so hard that it blocks the players' progress, nor so flat that it stops being interesting.

(Deep Sea Game) The Reason behind Remaking the Game by Legolas Wang

We have remade this game not once, not twice, but three times now. However, I do not regret spending that time testing things out. Actually, every single failed version taught me a lesson in some way.

In the first version, the focus was on the visuals, and it looked stunning, but the game felt empty; so after some discussion, we revisited the game design. In the second remake, we tested an all-new puzzle mechanic, which later turned out not to feel intuitive. Those are the stages that I found very meaningful.

Updates On Chinese Articles by Legolas Wang

I'm obsessed with extraordinary software, and I wish to share the great experience of using it with as many people as possible.

That is why I started writing Chinese articles: to bring relatively little-known applications and techniques to my home country.

For the past three months, I have written over 45 thousand words of Chinese articles, ranging from macOS techniques to specialty apps like Pixelmator Pro. Because of the high-quality articles I keep delivering, I have now, almost inadvertently, become a part-time columnist for sspai.com. I put a link to my articles under my website's 'Chinese' section.

 

Raw file for the video? Not Quite, It’s Called ‘Log’ by Legolas Wang

The raw format preserves more data in our photography and expands our ability to create effects in post. But can we do the same thing for video? Is there a video format equivalent to the raw file for still photos? The answer is yes.

Based on my research, there is indeed a raw video format. However, it's mostly limited to Blackmagic Design cameras. There is an excellent article about that raw format on B&H Explora ([Video Workflow: Using RAW Files | B&H Explora](https://www.bhphotovideo.com/explora/video/tips-and-solutions/video-workflow-using-raw-files)). But that's not what I'll discuss here, since the raw format for video is not available on most DSLR cameras.

Instead, I'll talk about 'log,' which is essentially a log curve that has been specially optimized for digital motion picture cameras to maximize the performance of the image sensor. Modern camera sensors are capable of capturing a vast amount of information, but the existing video formats can't hold all of it. Our camera has to run a series of algorithms to make the video meet the ITU requirements, which reduces the dynamic range and color information.

The log gamma curve is designed to record and transmit as much of the information captured by the camera's sensor as possible. Thus, you have greater control over what colors look like in the color grading process.
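To make that concrete, here is a small sketch comparing the standard Rec.709 OETF (the transfer curve defined in ITU-R BT.709) against a made-up generic log curve. The log curve here is purely illustrative, not any manufacturer's actual formula, but it shows the key property: log curves push shadow values much higher up the signal range, so more of the recorded code values describe shadow detail.

```python
import math

# Rec.709 OETF (per ITU-R BT.709) vs a toy log curve, showing how a log
# curve allocates far more code values to the shadows. The log curve is
# illustrative only, not any manufacturer's actual S-Log/C-Log/V-Log.

def rec709_oetf(L):
    """Scene-linear light L in [0, 1] -> Rec.709 video signal in [0, 1]."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

def generic_log(L, a=500.0):
    """A toy log curve mapping [0, 1] -> [0, 1]."""
    return math.log10(1 + a * L) / math.log10(1 + a)

# A deep-shadow value: the log curve lifts it far higher in the signal,
# leaving more recorded precision for shadow detail.
print(round(rec709_oetf(0.01), 3))   # 0.045
print(round(generic_log(0.01), 3))   # 0.288
```

That shadow lift is also exactly why ungraded log footage looks washed out on a normal display: values the monitor expects to be near black sit a third of the way up the curve.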

Each brand has a different name for this 'log' curve. Here are some common ones to find your match:

· Sony: S-Log2 / S-Log3
· Canon: Canon Log (C-Log)
· Panasonic: V-Log
· ARRI: Log C
· Fujifilm: F-Log
· Nikon: N-Log

One huge advantage of shooting in log is that the information in both shadows and highlights can be preserved, which allows you to capture a broader range of tones. Another advantage is a wider color gamut, which enables your image to be more vibrant and realistic.

Most of our monitors are Rec.709 monitors, so log video will look flat on them. One important thing to remember is that you have to color correct the footage to fit Rec.709 before sharing it. The '3D LUT' is designed to simplify this process.

For those who do not want to color correct but still want better-looking footage, you can also achieve a decent cinematic look by downloading creative LUTs and applying them directly to the log footage you shoot. Just remember to download the LUT that matches your camera's log curve.
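Under the hood, a LUT is just a lookup table that remaps pixel values. Real color LUTs are usually 3D (.cube files indexed by R, G, and B together), but this simplified 1D, per-channel sketch illustrates the core idea of lookup plus interpolation; the example table is invented for illustration:

```python
# A minimal sketch of how a LUT remaps footage. Real color LUTs are usually
# 3D (.cube files indexed by R, G, B together); this 1D per-channel version
# just shows the idea: index into the table and linearly interpolate.

def apply_1d_lut(value, lut):
    """Map a value in [0, 1] through a 1D LUT with linear interpolation."""
    position = value * (len(lut) - 1)
    lower = int(position)
    upper = min(lower + 1, len(lut) - 1)
    t = position - lower
    return lut[lower] * (1 - t) + lut[upper] * t

# A tiny 5-point LUT that lifts the midtones (a simple "brighten" look):
lut = [0.0, 0.35, 0.6, 0.82, 1.0]
print(apply_1d_lut(0.5, lut))  # 0.6 (0.5 lands exactly on the middle entry)
```

A "Rec.709 conversion" LUT for your camera is the same mechanism with a table baked to invert that camera's specific log curve, which is why the LUT has to match the log format of the footage.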

If you want to learn more about log and these terminologies:

· There is an excellent manual provided by Sony: http://assets.pro.sony.eu/Web/ngp/pdf/an-introduction-to-log-shooting.pdf
· And color FAQ by ARRI Camera that talks about ITU Rec 709, Rec 2020, Colour Gamut, 3D LUT: [ARRI Group: COLOR FAQ](http://www.arri.com/camera/alexa/learn/color_faq/)