Welcome to my Homepage!

My name is István Koren and I am a Research Assistant and PhD student at the ACIS group of RWTH Aachen University, as well as a developer of mobile apps who is passionate about getting around and speaking other languages! :) I love to work on the edge of technology, so feel free to contact me to discuss cool new ideas. I'm also up for any hipster joke you run into! :)

Blog Posts


boot2docker on Windows 7 with AMD CPU

Before you desperately throw your PC out of the window while trying to get boot2docker running on a Windows 7 machine with an M5A78L-M motherboard and an AMD FX 6300 CPU (yes, I am trying hard to feed the search engines with terms): try activating AMD-V virtualization in your BIOS. The setting is available under "CPU Configuration" and is called "Secure Virtual Machine Mode". Works for me! :)


GitLab Dashboard totally messed up after update? Try this. :)

After running the great update tutorials on gitlab.org, the dashboard was totally messed up: a big "Toggle navigation" button appeared at the top, everything was misplaced, and some files from the "assets" folder couldn't be loaded (404). As I remembered having had problems with the assets before because the timeout on my Apache was too short, I reran the asset precompilation:

cd /home/git/gitlab
sudo -u git -H bundle exec rake assets:clean RAILS_ENV=production
sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production

Then do a shift-reload in your browser and voilà, it should work.


Installing GitLab on CentOS managed by Plesk UPDATE

This is how I installed GitLab 6.4 on a Plesk 11.5 system running on CentOS 6.5. For those of you who found this post via Google (or any other search engine…), please skim through the whole post, as I describe several tricks in chronological order. :)

Generally, I followed the guide at https://github.com/gitlabhq/gitlab-recipes/blob/master/install/centos/README.md. It's probably the best guide I've ever followed for installing a system like GitLab that depends on multiple components! It reads well, explains all the steps and, most notably, doesn't miss anything – well, only as long as there is no Plesk involved. :)

First of all, I had to skip the database part, as I already had a running MySQL instance on my server. So I just created the necessary database and its users in the Plesk Panel.

For the vhost.conf part of the above guide, it is important to know that these Apache config files are managed by Plesk and there is a special directory for them – changing /etc/httpd/conf.d/gitlab.conf will not work, especially not if you're working on a subdomain. The correct path for subdomains is /var/www/vhosts/system/FQDN/conf instead. There, add (or change) the vhost.conf or vhost_ssl.conf respectively. You will not need the <VirtualHost *:80> wrapper, as the content of these files is inserted into the appropriate section of an automatically generated configuration file.
For me, the content of my vhost.conf is:

ServerName subdomain.domain.tld
ServerSignature Off

RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [NE,R,L]

This makes sure that whenever somebody tries to access your subdomain over HTTP, they get redirected to the HTTPS version instead. This is a hint from https://github.com/gitlabhq/gitlab-recipes/blob/master/web-server/apache/gitlab-ssl.conf.

The content of my vhost_ssl.conf (derived from the official tutorial as linked above):

ServerName subdomain.domain.tld
ServerSignature Off

ProxyPreserveHost On

<Location />
  Order deny,allow
  Allow from all

  ProxyPassReverse http://127.0.0.1:8080
  ProxyPassReverse https://subdomain.domain.tld/
</Location>

# this part redirects to the unicorn instance running on localhost
RewriteEngine on
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule .* http://127.0.0.1:8080%{REQUEST_URI} [P,QSA]
# from GitLab 6.5 above you need the following line otherwise you can't login to GitLab:
RequestHeader set X_FORWARDED_PROTO 'https'

# needed for downloading attachments
DocumentRoot /home/git/gitlab/public

#Set up apache error documents, if back end goes down (i.e. 503 error) then a maintenance/deploy page is thrown up.
ErrorDocument 404 /404.html
ErrorDocument 422 /422.html
ErrorDocument 500 /500.html
ErrorDocument 503 /deploy.html

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" common_forwarded
ErrorLog  /var/log/httpd/logs/gitlab.example.com_error.log
CustomLog /var/log/httpd/logs/gitlab.example.com_forwarded.log common_forwarded
CustomLog /var/log/httpd/logs/gitlab.example.com_access.log combined env=!dontlog
CustomLog /var/log/httpd/logs/gitlab.example.com.log combined

Now all you need to do is reconfigure the vhost settings (this is the part where Plesk reads your custom config files and renders them, together with its internal configs, into a new one) and restart the server:

/usr/local/psa/admin/bin/httpdmng --reconfigure-all
sudo service httpd restart

This is a hint I got from http://faq.hosteurope.de/?cpid=14385.

Still, trying to load GitLab in the browser resulted in a "Proxy Error: Error reading from remote server" – phew! How to solve it? Run

sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production

I found this solution at https://github.com/gitlabhq/gitlabhq/issues/4859 and the keyword here is "precompilation". As GitLab needs to precompile its resources on first usage, the first request takes a bit longer… too long for Apache, and a timeout is fired. With the above command you precompile the web app manually, and everything runs smoothly afterwards. :)

In fact, you may already be able to open GitLab in the browser, but there is still one thing left. I tried to open the robots.txt, which is served from the /public folder as per the configuration file (see above). Before I was able to do this, I needed to run a simple

chmod 755 /home/git

Thanks to https://github.com/gitlabhq/gitlab-recipes/issues/122 for pointing this out!

Voilà! GitLab up and running…

[UPDATE 2014-02-23]:
After following the tutorial on updating from GitLab 6.4 to 6.5, I ran into the issue that I could not log in via the web interface. Apparently, the user was redirected to an HTTP resource that didn't receive the HTTPS cookies. Following the advice on the GitLab issues page, adding this line to the vhost_ssl.conf fixed it:

RequestHeader set X_FORWARDED_PROTO 'https'

I changed the file excerpt above accordingly.
Besides, I realized the Apache restart command above was wrong for Plesk, which is why I changed it as well.


Trying out the Laika Inventors Kit

Finally I had some time around Christmas to assemble my Laika Explorer board, which I backed last summer on Kickstarter (the first project I ever backed). I chose the "Inventors Kit", which contains a breadboard, cables, LEDs, potentiometers and motors, among other things. The board is connected to my Raspberry Pi via USB and has its own power supply (5V) to drive the motors. Mainly I followed the tutorials at http://www.project-laika.com/. Unfortunately, reading the tutorials is currently the only way of learning how to work with it, as I did not find any proper documentation – the Python manual is a PDF barely longer than one page.

The first exercise was getting one of the supplied LEDs to light up. I connected it to a digital output and routed the current back through a resistor. Later I connected one of the motors, and finally I was able to control its speed with the potentiometer.
[Photo: laika1]

Next was assembling the 7-segment display:
[Photo: laika2]

That was pretty tough – not because the task itself was hard, but because this was my first go at electronics and I was worried about destroying the board. :) Unfortunately, the kit doesn't contain male/male cables for the last step of the 7-segment tutorial, so I had to find a cable myself to power the display.
[Photo: laika3]

So far I’ve only programmed the board with Scratch which is totally easy. Next time I will try Python.

In case you're interested: they have started selling the Inventors Kit regularly; more information here:
http://www.project-laika.com/purchase-inventors-kit


Talk Announcement: XMPP – The Potential Heartbeat of Global-Scale Pervasive Computing

Tomorrow my colleagues from Technische Universität Dresden will give a talk at our faculty:

[Slides: Informatik-Kolloquium XMPP]

Abstract: The original vision of Pervasive Computing as an experience "refreshing as a walk in the woods" has been quite over-stressed in recent years. Nowadays we already live in a world of personal device zoos (notebooks, pads, phones and even smart watches), immersive social interaction, and more and more addressable things connected to this overall experience. But while each of the demoed use cases looks impressive by itself, there is as yet no means of global-scale communication and coordination to connect these isolated islands, comparable to HTTP in connection with HTML and other standards for the Web. We explain how the eXtensible Messaging and Presence Protocol (XMPP) is about to fill this gap and what is still missing. After a crash course in relevant XMPP basics, we show current research and standardization work in XMPP for pervasive computing, social computing and the Internet of Things. We furthermore present research work within the Mobilis project at TU Dresden, which adds session mobility and service-oriented XMPP development to these building blocks.

Details can be found here. I am very much looking forward to it! Hopefully we will have a great discussion afterwards.


Adding Awareness and Undo/Redo Support to the Realtime House of Quality App

After releasing the House of Quality App on my institute’s homepage and using it in a project with colleagues, I received feedback on what to improve. As the House of Quality we were building together had more than 40 customer attributes and over 20 engineering characteristics, it was very hard to navigate around all the cells while remembering which requirement we were working on currently. Hence the first improvements were some mouseover effects to mark the current row and column headers.

In the first version of my app, it was also very hard to follow collaborators' changes; therefore one suggestion was to add some sort of visual hint on where exactly something was changed remotely. In the context of shared editing systems, this is called awareness. Thankfully, the Google Drive Realtime API comes with some handy functionality that makes it easy to add awareness features: the update event includes the userId of the collaborator responsible for the edits. Through the Realtime API's document object (not to be confused with the DOM document!) we can call getCollaborators() to get an array of collaborators. Now, the great part is that every user object has a predefined color that can be used to highlight the change in the DOM. In my case, I use the color for a simple highlighting effect included in jQuery UI: effect("highlight", {color: color}, 2000);.
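As a minimal sketch of this lookup – with a mocked collaborators array, since in the real app it would come from the Realtime document's getCollaborators() – the highlight color for an incoming edit can be resolved like this:

```javascript
// Resolve a collaborator's predefined color from the userId carried in an
// update event. The collaborators array is mocked here for illustration;
// in the app it would be document.getCollaborators().
function colorForUser(collaborators, userId, fallback) {
  for (var i = 0; i < collaborators.length; i++) {
    if (collaborators[i].userId === userId) {
      return collaborators[i].color;
    }
  }
  return fallback; // e.g. the collaborator already left the session
}

var collaborators = [
  { userId: "alice@example.com", color: "#4DB6AC" },
  { userId: "bob@example.com",   color: "#FF8A65" }
];

var color = colorForUser(collaborators, "bob@example.com", "#CCCCCC");
// and then, as above: $(cell).effect("highlight", {color: color}, 2000);
```

The fallback color covers the edge case where an event arrives after its author has already disconnected.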

In the following figure, you can see the highlights for the current cell (orange) and a freshly arrived change by a remote collaborator (green):

Awareness in the House of Quality App

To add awareness about current collaborators, I also added a small hideable box on the top-right of the screen to show a list of users having this document open together with their Google profile pic and awareness color.

While responding to user requests, I noticed Google had updated the Realtime API. It now has undo/redo support for local operations. To use it, I first added undo/redo buttons to the top-left corner of my window. You can now simply add an event listener to the Realtime model to receive events when undo/redo becomes available or unavailable: model.addEventListener(gapi.drive.realtime.EventType.UNDO_REDO_STATE_CHANGED, HouseOfQuality._onUndoRedoStateChanged);. The implementation of the listener is listed below:

HouseOfQuality._onUndoRedoStateChanged = function(e) {
    $("#btn_undo").button({
        disabled: !e.canUndo
    });

    $("#btn_redo").button({
        disabled: !e.canRedo
    });
};

Now, to undo, I linked the undo button to this simple piece of code:

if (HouseOfQuality.model.canUndo) {
    HouseOfQuality.model.undo();
}

It's as easy as that! Though – in the end it wasn't quite as easy as shown here. Previously, if a user edited a cell, I immediately changed the respective DOM element before the model was changed and the model-changed event was fired. In the change listener, I called the isLocal() method to check whether the changes had already been applied to the DOM or not. For undo/redo, I had to replace this kind of immediate response to user actions with the model change/event loop. However, the (local) delay can neither be measured nor perceived.

Another issue I had was with multiple undo/redo operations resulting from a single "user-level" edit. For example, when I added a new user requirement row, I created the custom object, populated it with some default data and then added it to the document model. Each of these operations entailed its own undo operation. To fix this, I used a custom object initializer, which already reduced the number of undo operations. Another fine detail of Google's Realtime API is that even creating a custom object injects an undo operation. To avoid this behaviour, I simply combined creating the custom object and adding it to the app's data model in a compound operation:

// begin compound operation, otherwise this results in two undo/redo operations
HouseOfQuality.model.beginCompoundOperation();
var product = HouseOfQuality.createProduct(uuid, "Product");
HouseOfQuality._products.push(product);
HouseOfQuality.model.endCompoundOperation();

Finally, I added the House of Quality App to the Chrome Web Store. Feel free to try it out. :)

Still, one interesting question is left: How would one develop an app like Google Docs or Sheets with the Realtime API? For any hints, write me. :)


Turning an Excel Sheet into a Collaborative Web Application

Recently I faced the problem of using a diagram available as a protected Excel template file. I had to fill out more columns and rows than were provided in the template, but unfortunately the password for unprotecting it was not available. My first idea was to import the file into Google Sheets, which normally unprotects the file automatically and makes it editable. However, the layout was broken after the import and could not easily be fixed due to some diagonal cells and other quirks of the original sheet.

Luckily, around the same time, Google came out with its Google Drive Realtime API. I was also keen to try out my new jQuery UI skills, which I had just acquired some days before while building another prototype. In the end, developing a custom web app turned out to be the solution to the restrictions of the Excel file. Here I want to discuss some pitfalls I encountered while building the app.

The underlying template is the “House of Quality”, a diagram and methodology for creating products that translates the “Voice of the Customer”, in terms of customer requirements, into the “Voice of the Company”, aka engineering terms. It is basically a matrix where, on the left side, user requirements are listed, which in one of the next steps are matched to concrete quantifiable requirements listed at the top. The latter are also evaluated in a correlation matrix (“the roof”) that estimates the expected influence of requirement A on requirement B.

About Google Drive Realtime API

The Drive Realtime API by Google, launched on March 19th, 2013, is a client-side JavaScript API that allows developers to build near-realtime collaborative web applications that automatically save their data, distribute changes to contributors and resolve conflicts. Moreover, together with the Drive SDK, web applications can be integrated into Google Drive, a one-stop shop for document management tools already known from Google Docs, like interfaces for renaming the document or inviting collaborators.

The API itself comes as a JavaScript library that features collaborative versions of common objects together with an event-based model for receiving updates on both awareness events like user joins, and model events such as changes in a specific string object. There are three collaborative objects available, the CollaborativeString, CollaborativeList and CollaborativeMap. All three data types can be created using a factory that builds these objects and ensures that future updates, both local and remote, are saved on the server and propagated to all other contributors.

Furthermore, custom objects can be created by registering their type and members with the Drive API. Some pitfalls apply when it comes to initializing these custom objects. To ensure that initializers are only executed once in the (collaborative!) lifetime of the object (and not on each contributor's model separately), initializer methods need to be registered:

// create Product class
HouseOfQuality.Product = function() {};

// register custom collaborative object 'Product'
gapi.drive.realtime.custom.registerType(HouseOfQuality.Product, "Product");
HouseOfQuality.Product.prototype.name = gapi.drive.realtime.custom.collaborativeField("name");

// register custom initializer
gapi.drive.realtime.custom.setInitializer(HouseOfQuality.Product, function(name) {
    this.name = name;
});

/**
 * ...
 * Creating document and obtaining data model...
 * ...
 */

// create instance
var product = model.create(HouseOfQuality.Product, "Apache Wave");

These steps (registering the custom object) have to be executed before creating a concrete document.

Now let me guide you through the steps that were needed to create a collaborative web application based on the Excel template.

Building the HTML5 Interface

In the first iteration, my goal was to create an interface based on HTML5, CSS and JavaScript that has all the functionality for editing the diagram. I used the MVC pattern, so the idea was that when wiring the app up to Google Drive, I would only have to replace the local model objects (arrays, custom objects) with their collaborative counterparts on the one hand, and listen to remote model changes in the controller to update the view accordingly on the other.

On my first try, I thought best practice would be to leave the table, tr and td elements aside and create a table layout using the CSS display attribute, having certain div elements take the role of table-row or table-cell. Unfortunately, this turned out to be a dead end, as I did not find a cross-browser way to group sequences of table-cells together as I was used to in plain HTML with colspan and rowspan. So in the end I changed all div elements back to their tr or td equivalents. However, I kept the CSS, so in the future this may still work out.

For the editable cells, I decided on the Jeditable in-place editor, as it comes with some handy functionality like select boxes and callbacks for retrieving the result of an edit. Below is an example of how to make all cells with the CSS class hq_matrix_cell_custreq editable:

// make the customer requirements’ name editable
$(".hq_matrix_cell_custreq").editable(function(value, settings) {
    HouseOfQuality._onCellChangedUserRequirementName(value, settings, $(this));
    // return the entered text as output
    return(value);
},
        {
            type: "text",
            tooltip: "Click here to change the value",
            onblur: "submit"
        }
);

Here, a crucial benefit of using jQuery is apparent: to wire events to certain elements, we only have to query with the appropriate selector once to work on all matching elements. In the bottom part of this code, we tell Jeditable to use a “text” input field as the in-place editor. The onblur preference key lets us specify that the in-place editor should submit the change when the input field loses focus (and is thus replaced by a text node). In the upper part of the code, we define a callback function that is executed on submit. We basically pass the arguments on to another function while adding a reference to the element scope, which is the element the in-place editor was placed in.

The data model behind the interface is a simple array-based structure with custom objects for the rows, aka customer requirements, and the columns, aka engineering terms. The former are related to the latter using JavaScript dictionaries. New objects get assigned a custom UUID, which is also reflected in the interface using HTML5 data attributes, in order to later identify the correct model value in the Jeditable callback.
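A rough sketch of this UUID bookkeeping – the helper names and the shape of the requirement objects are illustrative, not the app's real code:

```javascript
// Generate a short random id; the app's real UUIDs may look different.
function makeUuid() {
  return "xxxx-xxxx".replace(/x/g, function () {
    return ((Math.random() * 16) | 0).toString(16);
  });
}

var requirements = {}; // uuid -> customer requirement object

function addRequirement(name) {
  var uuid = makeUuid();
  requirements[uuid] = { name: name, relationships: {} };
  // in the interface, the new row would carry the id as an HTML5 data
  // attribute, e.g. $(row).attr("data-uuid", uuid)
  return uuid;
}

// Jeditable callback: the uuid is read back from the edited cell's row,
// e.g. $(this).closest("tr").data("uuid"), and identifies the model object
function onNameEdited(uuid, value) {
  requirements[uuid].name = value;
  return value;
}

var id = addRequirement("easy to clean");
onNameEdited(id, "easy to maintain");
```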

The diagonal matrix at the "roof" of the diagram was the roadblock when importing the Excel file into Google Drive. It was also a major challenge when it came to recreating it in HTML5; I finally ended up using a canvas and some algorithms that both draw the diagonal lines and calculate the relative positions of the input elements within the matrix.
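The position math for the roof cells can be sketched as follows; the formula is a plausible reconstruction for 45-degree roof lines, not the app's exact algorithm:

```javascript
// Position of the correlation cell for the column pair (i, j) in the roof.
// With column width colWidth, the cell sits horizontally midway between the
// centers of columns i and j, at a height proportional to their distance
// (45-degree lines from both columns meet at rise = run / 2).
function roofCellPosition(i, j, colWidth) {
  var lo = Math.min(i, j);
  var hi = Math.max(i, j);
  return {
    x: ((lo + hi + 1) / 2) * colWidth, // midpoint between column centers
    y: ((hi - lo) / 2) * colWidth      // height above the roof's base line
  };
}

var p = roofCellPosition(0, 2, 40);
// p.x === 60, p.y === 40
```

Since min/max are taken first, the pair (2, 0) yields the same cell as (0, 2), which matters because correlations are symmetric.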

To make the amount of rows and columns flexible, I added buttons on the top of the app that simply initialize new data objects and insert new elements into the HTML5 page.

Wiring the UI up to Google Drive

As a last step, I wired the data model up to the Google Drive API. In doing so, I followed the advice of the “Writing your first Realtime API app” video by the Google Drive team, which says developers should stick to the helper class provided in the Realtime API Quickstart tutorial. Indeed, the realtime-client-utils.js does a great job of initializing the app, as it comes with ready-made functionality for letting users grant access to their account, as well as for creating a new file or reading file parameters out of the URL when starting your app from within Google Drive. It helps a lot with doing everything in the right order: it comes with callbacks for registering custom types, initializing the model and loading a file.

First, we need to register custom types. As mentioned above, I employ some custom objects that needed to be transformed into their collaborative counterparts. To make them collaborative, all you need to do is call the collaborativeField() method for every property and setInitializer() for any initializer. In my case, the initializer for the requirement's type creates a simple CollaborativeMap instead of the JavaScript dictionaries I used before; the Jeditable callback functions were also slightly changed to work on collections instead of plain JavaScript types.

My initializeModel() callback constructs collections for all of the app's main member objects. Again, it is important to note that this method is only called once in the collaborative lifetime of the app – no matter how many users later connect to the same document. At this point, all changes made in the interface are already saved on the server in near-realtime!

To restore the saved data later, the onFileLoaded() callback takes care of reflecting the model state in the interface. It also adds listeners for model updates, which is a bit tricky when it comes to collaborative types nested within another custom collaborative object. For that case, listeners have to be attached recursively:

// add listener for the list of user requirements
HouseOfQuality._userRequirements.addEventListener(gapi.drive.realtime.EventType.VALUES_ADDED, HouseOfQuality._userReqValuesAdded);

// add listener for every user requirement
var userReqArray = HouseOfQuality._userRequirements.asArray();
for (var i = 0; i < userReqArray.length; i++) {
    var userReq = userReqArray[i];
    userReq.addEventListener(gapi.drive.realtime.EventType.VALUE_CHANGED, HouseOfQuality._userReqValueChanged);
    // add listener for funcReq relationships
    userReq.relationships.addEventListener(gapi.drive.realtime.EventType.VALUE_CHANGED, (function(uuid) {
        return function(e) {
            HouseOfQuality._userReqRelationshipsValueChanged(e, uuid);
        };
    })(userReq.uuid));
}

First, a VALUES_ADDED listener is added to the main list of user requirements that gets called when a new user requirement is added. Then, a listener for VALUE_CHANGED events is appended to every list member. It is called on any changes on the user requirement object itself. Finally, a listener is added to the relationship map within each user requirement object.

Please note the last part of the code, where I return a function that simply puts the current parent object's ID into the event envelope, because of the specifics of JavaScript closures. Otherwise, there is no way to tell which parent object a collection member inside a custom object belongs to.
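The wrapper is the classic remedy for JavaScript's function-scoped loop variables; stripped of the Realtime API, the pattern looks like this:

```javascript
// Without the immediately-invoked wrapper, every listener would see the loop
// variable's final value; the wrapper freezes one uuid per listener.
function makeListeners(uuids) {
  var listeners = [];
  for (var i = 0; i < uuids.length; i++) {
    listeners.push((function (uuid) {
      return function (e) {
        return uuid + ":" + e; // uuid is captured per iteration
      };
    })(uuids[i]));
  }
  return listeners;
}

var ls = makeListeners(["a1", "b2"]);
// ls[0]("changed") === "a1:changed", ls[1]("changed") === "b2:changed"
```

(With today's ES6, declaring the loop variable with let would make the wrapper unnecessary, but that wasn't an option back then.)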

It is important to say that these events get fired for all model changes, even the local ones. To account for the optimistic model behind the underlying Operational Transformation algorithm, UI updates should be performed immediately locally, not only after a server round-trip. The origin of incoming events can simply be checked with the isLocal flag that comes with every event; in my case, I simply ignore the local updates.
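Stripped down to plain JavaScript – with mocked event objects instead of real Realtime API events – this filtering looks like this:

```javascript
// Skip DOM updates for events that originated locally, since the local UI
// was already updated optimistically; only remote edits touch the DOM here.
function handleValueChanged(event, applyToDom) {
  if (event.isLocal) {
    return false; // local edit: the DOM already reflects this change
  }
  applyToDom(event); // remote edit: update (and possibly highlight) the cell
  return true;
}

var applied = [];
handleValueChanged({ isLocal: true,  newValue: "A" }, function (e) { applied.push(e.newValue); });
handleValueChanged({ isLocal: false, newValue: "B" }, function (e) { applied.push(e.newValue); });
// applied is now ["B"]
```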

With that, the collaborative web application is ready to use. To make it more comfortable to load diagrams created with my app, I also added the “Drive Integration” within the Google API Console. The only data required is an app icon as well as the URL of your app, which gets the document ID appended when you click your file within Google Drive.

Conclusion

The Drive Realtime API makes it fairly easy to create collaborative near-realtime web applications. For me, it took more time to build the interface than to wire it up to Google Drive. Some duties still remain for developers when transforming a single-user app into a multi-user collaborative app, like creating and using a data model that builds on the API's collaborative types.

The biggest benefit I experienced was that the transformed app automatically comes with the save and load functionality of Google Drive. Even if you don’t go for it because of the collaborative features first, you get a neat integration to Google Drive that makes it easy for your users to come back to your application to load their files. In contrast to other collaborative editing frameworks like ShareJS, you don’t need your own server infrastructure (except for hosting your app files).

Still, the application I created is currently far from being a perfect collaborative application. For example, I included neither undo functionality nor a single awareness tool for letting users know that others are also contributing to the document. This is a crucial omission, first of all because there are even event callbacks available in the Drive Realtime API that keep you informed of other users' edits, like means for sharing different users' cursor positions. Anyway, it's a great starting point, and I would love to see a community project creating some awareness widgets for apps built on the Realtime API.

Feel free to try out my app. Let me know if any of my tricks helped you with your app!


Writing on a Screen… Reviewing the Galaxy Note II

Thanks to Samsung Germany and Mobile Business, I got a late Christmas present last year – to be honest, a present for only three weeks: a Samsung Galaxy Note II for review.

I'll spare you the unboxing video… Anyway, surprisingly, my first impression was NOT "omg, what a big phone". Instead, I have to say that although I do not own a Galaxy S III personally, it has somehow made me accustomed to such screen sizes. However, I can clearly see the difference in screen density compared to my Sony Xperia S (worse on the Note). The Note is incredibly fast – no wonder, as it is powered by a quad-core processor.

My first impression of the UI: cluttered! It is full of information; you scroll through the home screens and see ads for movies on Samsung's Video Hub and, especially confusing, ads for apps in real app-icon size that have not yet been downloaded from Samsung Apps. As soon as you detach the "S Pen", another set of interfaces is presented to you – what seems to be a second set of home screens with icons for pen-enabled apps. Again, pretty confusing! Finally, to conclude the enumeration of confusing things: I am still mixing up the back and menu buttons, as on the Note the back button is on the right, while on my Sony it is on the left (which, though being absolutely reasonable, is against the convention, I heard).

But once you have overcome these obstacles, you'll love using the pen… Stay tuned for part II of the review! :)


Some Changes…

With the new year came new challenges: I am now a Research Assistant at RWTH Aachen University, one of Europe's outstanding universities, and not only in the field of computer science. I am part of the Advanced Community Information Systems (ACIS) group at the i5 chair, working in the field of Technology Enhanced Learning (TEL), learning interesting new things and continuing my passion for mobile and web technologies, especially XMPP.

I hope to stay connected to my past research partners, including PUC-Rio (the WWW conference is held in Rio this year!), Shizuoka University in Hamamatsu, and of course my alma mater, the Technische Universität Dresden. Speaking of Dresden, please check out this article about this great university, just published in the New York Times!


Introducing The Clock and my new Facebook Page

Today, The Clock has hit Samsung Apps. I hope you like it! For me, this was a little exercise in achieving great effects, gradients and animations with CSS3. I was surprised at the capabilities of the WebKit engine built into my Wave 3 device. It even supports the HTML5 slider input type, though this was a bit buggy and I had to implement it myself.

The Clock Widget

I also finished my new official Facebook Page. Make sure you like it! :)