AccidentalRebel.com

My name is Karlo and I'm currently employed as a Cyber Security Engineer. I have an interest in threat hunting and malware analysis. I also work on cybersecurity-related programming projects like malware analysis tools and remote access tools.

Chef Wars Postmortem -- What went wrong: Optimizing too early and too late

in chefwars, gamedev, mindcake, postmortem

Note: This is from a series of articles that outlines the things I've learned while making Chef Wars for 2+ years.

TLDR

  • There is more to the saying "premature optimization is the root of all evil" than the part everyone quotes.
  • Instead of asking WHEN to optimize, it is more important to ask WHAT and HOW to optimize.

It is a well-known adage among programmers that premature optimization is the root of all evil. If this is true, then I must have been very close to the devil himself.

During the early months of development on Chef Wars, I did my best to optimize my code as much as possible. We were making a big game and I wanted a stable foundation to build it on. I obsessed over lots of things, from the interconnection of the various systems down to the folder structures. I was happy with everything I'd built, but sadly, progress was slow.

I realized at this point that I was optimizing prematurely. If I wanted to reach my milestones on time, I needed to change my approach. This meant leaving the optimizations for later. When is later, though? I figured it made sense to do it at the end, when all the systems were in place.

All went smoothly until we reached Open Beta. The game was reported to be sluggish and almost unplayable, which signaled that it was time to start optimizing. While I was able to optimize some parts, there were others that I could not optimize properly without a major change to the code. Sadly, rewrites were not an option as we were running out of time.

The profiler has been really helpful in catching performance problems.

Looking back, it is easy to pinpoint what went wrong. I optimized too early, then changed my approach, only to find out that I was already too late to optimize certain critical parts. I, of course, want to prevent this from happening again. So the million-dollar question is: how does one determine when to optimize? How does one know when it is too early and when it is too late?

The complete version

I later learned that the famous adage actually has a longer version:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

From Donald Knuth's paper "Structured Programming with go to Statements"

It turns out there was more to the saying, and it completely changes the lesson to be learned. Breaking it down, we can infer that the author is telling us that:

  • Too much obsession over non-critical parts wastes time.
  • Only focus on efficiencies that matter.
  • Optimize whenever possible, but not at the expense of the previously mentioned points.

So instead of asking WHEN to optimize, it is more important to ask WHAT and HOW to optimize. In other words, anytime there is a chance to evaluate whether an optimization is needed, one needs to consider whether there really is something worthwhile to optimize, and if so, how to proceed in optimizing.

Answering the WHAT and HOW

Knowing how to answer the WHAT and HOW is not easy and requires both experience and careful planning to get right. The internet is a bit divided about this, as nobody really has a definitive answer. In spite of this, I was able to gather some helpful nuggets of wisdom during my research that are worth considering:

  • Be critical of which optimizations to apply at each stage of the project. Determine how critical each one is and whether it can be done later.
  • If setting aside an optimization for later, prepare the code so that it will be easy to apply when the time comes.
  • Proper planning during the design stage can determine what to build and how to optimize in advance.
  • Measuring and profiling reveals which optimizations are the most effective to use in the future.

Conclusion

There is a certain sense of pride in producing optimized and stable code. Sadly, this kind of perfection comes at the cost of time. The solution is to always consider when, what, and how to optimize.

This may all seem like overkill to worry about, but after going through two years' worth of development on Chef Wars, I know all of this is worth the extra effort to do right. I hope that what I've learned may also be of use to you.

[Our game is running better now and you can play it by downloading it on Android and iOS. Also check out my previous postmortem where I talk about something that went right.]

Chef Wars Postmortem -- What went right: Having a Universe File

in chefwars, gamedev, mindcake, postmortem

Note: This is from a series of articles that outlines the things I've learned while making Chef Wars for 2+ years.

TLDR

  • All data in our game is contained in one Excel file we call the "Universe".
  • Prototypes can be done in the Universe Excel file itself.
  • Iteration is easier as we only need to change one file.
  • We made a system that downloads changes from our server so players don't need to update their builds.

Before we started development on Chef Wars, Cliff, my co-founder and game designer for the team, already had pages of spreadsheets containing the important values in the game. It's kinda like a game design document, but in the form of tables, columns, and rows. This "Universe" file contained everything: stats, dialogue, competitions, locations, chefs, and enemies, just to name a few.

*This file definitely gives a hint on what type of guy Cliff is.*

Having a list of all the data that will be used in the game has helped us visualize the scope and the systems to be built, especially in making prototypes. One time Cliff made a simulation of the battle system using his Excel mastery. The universe data is fed into this simulation (e.g., competition level, recipe power) and the expected values are displayed (e.g., judging result, rewards amount). This mockup allowed us to see how the battles play out and made the whole thing easier for me to understand and implement in the engine.

All the content of the Universe file is then converted to the JSON format, which is used directly by the game. Iterating on the game is easy because the file just needs to be converted again for the new changes to show up. The conversion process is done manually, though, using a CSV to JSON tool. I would have automated the process but didn't have the time to work on it.
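If I ever get around to it, even a bit of elisp could do the job. Below is only a hypothetical sketch, not the tool we actually use: it assumes a plain comma-separated buffer with the column names on the first row and no quoted commas, and the function name is made up.

(require 'cl-lib)
(require 'json)

(defun arebel-csv-buffer-to-json (json-file)
  "Convert the CSV in the current buffer into a JSON array in JSON-FILE.
Hypothetical sketch: assumes the first row holds the column names and
that no field contains a quoted comma."
  (interactive "FWrite JSON to: ")
  (let* ((lines (split-string (buffer-string) "[\r\n]+" t))
         (headers (split-string (car lines) ","))
         ;; Pair every field in a data row with its column name.
         (rows (mapcar (lambda (line)
                         (cl-mapcar #'cons headers (split-string line ",")))
                       (cdr lines))))
    ;; Encoding a vector of alists yields a JSON array of objects.
    (with-temp-file json-file
      (insert (json-encode (vconcat rows))))))

Calling it from the exported CSV buffer and pointing it at the game's data folder would replace the manual tool.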

*It's like the Matrix*

Initially, when we wanted to update some values, we would need to push a new build version that players need to download. We figured that this is too cumbersome especially if we really have some critical changes we want to get out as soon as possible. As a solution to this, we made a system where a master copy of the JSONs are saved on our servers. We can change the data from here and the game would automatically download the necessary files that we changed. This is a really great feature that has helped us push important changes without having the need for a new build. But it does require a lot of bandwidth especially if a lot of players request for the new data so we do it only when needed like on crash producing bugs.
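The game itself is not written in elisp, so the sketch below is purely illustrative (the server URL, folder, and function names are all made up), but it shows the shape of the system: the client fetches a version manifest, compares it against its local copy, and downloads only the files whose version changed.

(require 'json)
(require 'url)

;; Hypothetical names; the real server and folders are different.
(defvar game-data-url "https://example.com/gamedata/")
(defvar game-data-dir "~/gamedata/")

(defun game-data--fetch (file)
  "Download FILE from the server, save it locally, and return its path."
  (let ((path (expand-file-name file game-data-dir)))
    (make-directory game-data-dir t)
    (with-temp-buffer
      (url-insert-file-contents (concat game-data-url file))
      (write-region (point-min) (point-max) path))
    path))

(defun game-data--read-manifest (path)
  "Parse the manifest at PATH into an alist of (file . version), or nil."
  (when (file-exists-p path)
    (with-temp-buffer
      (insert-file-contents path)
      (json-read))))

(defun game-data-sync ()
  "Download only the JSON files whose version changed on the server."
  ;; Read the old manifest first, then fetch the fresh one to compare.
  (let ((local (game-data--read-manifest
                (expand-file-name "manifest.json" game-data-dir))))
    (dolist (entry (game-data--read-manifest (game-data--fetch "manifest.json")))
      (when (> (cdr entry) (or (cdr (assq (car entry) local)) 0))
        (game-data--fetch (symbol-name (car entry)))))))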

As you can see, we've spent a lot of time making sure that our game is as data-centric as possible, and it has benefited us immensely. This approach has been so useful that we plan to use it in our future projects. And hopefully, after reading this, we've convinced you to try it out too.

[Check out how the universe has been transformed into a game by playing Chef Wars on Android and iOS. Also, be sure to check out Cliff's postmortem where he talks about the things we learned during our global launch!]

Temp Solution For When Text Copying Does Not Work in Emacs Under Windows Subsystem for Linux

in emacs windows linux

One of the problems I was having with my Emacs environment under WSL (Windows Subsystem for Linux, a.k.a. Bash on Windows) is that I could not copy text from WSL Emacs to other Windows applications. Copying and pasting from Windows to Emacs works without any problems, so it's strange that it does not work the other way around.

I tried a lot of solutions from Google but none of them seemed to work. There was an Emacs package called simpleclip that worked, but the results were not consistent.

I then realized that a temporary solution would be to make use of Windows' clip.exe command line utility, as seen below.

(defun arebel-set-clipboard-data (str-val)
  "Put STR-VAL in the Windows clipboard. Copying to Windows from WSL
does not work on my end so this one is a temporary solution.

This function is called from within the simpleclip package when a
copy or cut command is issued."
  ;; cmd.exe's echo needs carriage returns instead of newlines,
  ;; otherwise multi-line regions get mangled on the way to clip.exe.
  (start-process "cmd" nil "cmd.exe" "/C"
                 (concat "echo " (replace-regexp-in-string "\n" "\r" str-val)
                         " | clip.exe")))

It works quite nicely, especially after integrating it with simpleclip. This will do for now until I find a better solution.
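For reference, the integration can be as simple as an advice. This is a sketch of one way to do it, overriding simpleclip-set-contents (which simpleclip exposes); whether it matches my exact setup at the time, I can't guarantee:

(with-eval-after-load 'simpleclip
  ;; Route every copy through clip.exe by replacing simpleclip's setter.
  (advice-add 'simpleclip-set-contents
              :override #'arebel-set-clipboard-data))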

EDIT (2017-10-01): Turns out the original code could not copy a region with multiple lines due to the difference in carriage return characters. This is now fixed with (replace-regexp-in-string "\n" "\r" str-val).

Converting org-journal entry to org-page post

in emacs org-mode

Since my recent switch from WordPress to org-page, I wanted a way to convert my org-journal entries to org-page posts. Instead of copying each entry by hand and pasting it into a new org-page post buffer, I decided to write some elisp that would do it automatically, which can be seen below:

(defun arebel-org-journal-entry-to-org-page-post ()
  "Copy the org-journal entry at point and convert it to an org-page new post buffer."
  (interactive)
  (if (eq 'org-journal-mode major-mode)
      (let ((headline-text (nth 4 (org-heading-components)))
            (entry-text (org-get-entry)))
        (funcall-interactively 'op/new-post "blog" (concat (buffer-name) "-" headline-text))
        (goto-char (point-max))
        (insert entry-text))
    (message "This function can only be called inside org-journal-mode.")))

The function is simple and uses functions from org-mode and org-page.

  • First, it checks if the current buffer is in org-journal-mode.
  • Then it gets the headline text and the entry text.
  • It then calls op/new-post. It does it interactively so that it will trigger the prompts needed to populate the template. (Also notice that it takes the org-journal buffer name plus time as the blog post's org file name. This way I don't have to specify it.)
  • It then inserts the entry-text at the end of the buffer.

From here I am free to edit, commit, then publish.

It's working great. As proof, the post you are reading right now was made with the code above.

Minifying JSON Files From Within Emacs

in emacs

I needed a way to minify JSON files from Emacs, so I made the short function below.

(defun arebel-minify-buffer-contents ()
  "Minify the buffer contents by removing whitespaces."
  (interactive)
  ;; Delete the leading whitespace of every line...
  (delete-whitespace-rectangle (point-min) (point-max))
  ;; ...then join everything into one line by removing the newlines.
  (goto-char (point-min))
  (while (search-forward "\n" nil t)
    (replace-match "" nil t)))

The function is very simple. First it deletes the leading whitespace of every line in the buffer, then it removes every newline.

This effectively turns this:

{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
"GlossDef": {


                          "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}

To this:

{"glossary": {"title": "example glossary","GlossDiv": {"title": "S","GlossList": {"GlossEntry": {"ID": "SGML","SortAs": "SGML","GlossTerm": "Standard Generalized Markup Language","Acronym": "SGML","Abbrev": "ISO 8879:1986","GlossDef": {"para": "A meta-markup language, used to create markup languages such as DocBook.","GlossSeeAlso": ["GML", "XML"]},"GlossSee": "markup"}}}}}

It works for my current needs but I have not fully tested it yet. It also works for Emacs Lisp buffers.
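For buffers that are guaranteed to contain valid JSON, another approach is to round-trip the contents through the built-in json.el library, which prints compactly by default. This is just a sketch of the idea and, unlike the function above, it won't work on Emacs Lisp buffers:

(require 'json)

(defun arebel-minify-json-buffer ()
  "Minify the current buffer by re-encoding its JSON contents."
  (interactive)
  (goto-char (point-min))
  ;; Parse the whole buffer, then replace it with the compact encoding.
  (let ((data (json-read)))
    (delete-region (point-min) (point-max))
    (insert (json-encode data))))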