Tags: Browsing articles tagged «lang:en»
The Nazareth Knot is an ancient drawing found in a church in Nazareth (duh!). It is a really nice Byzantine knot that reminds us of the very similar Celtic knots. Of course I wanted to recreate it. First of all, I took a photo of the original mosaic. Then I made a pen-and-paper sketch of it to get a better intuition of the workings of this knot. And finally I created an SVG image, which is what this blog post is all about.
I did some calculations in Ruby and generated the SVG using ERB code. I really enjoyed playing with Ruby's array methods like
repeated_permutation. If you don't know what those methods do, I would recommend looking at the documentation of the
Enumerable mixin; they are really useful. For development I wrote a tiny script that renders the ERB into proper HTML whenever the code changes, using fswatch. It also displays errors on the page when something goes wrong. Happens to the best. I also monkey-patched Ruby's Matrix class with some 2D affine transformation foo for all the calculations. Finally, I used copies of the paths and the
stroke-dasharray CSS property to create the interweaved strings, as you can see on the web page.
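To illustrate the trick, here is a minimal, self-contained sketch of the stroke-dasharray weaving technique, not the actual code from the bxt/Nazareth-Knot repository; the paths, colors and dash values are made up for this toy example. Two strands cross; a dashed second copy of the blue strand is drawn on top of the red one, so blue covers red wherever a dash falls and red shows through in the gaps, producing the over/under weave:

```ruby
require 'erb'

# Hypothetical stand-in for the real generator: two crossing curves,
# then a dashed re-draw of the blue one to fake the interweaving.
template = <<~SVG
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
    <path d="<%= blue %>" stroke="blue" stroke-width="6" fill="none"/>
    <path d="<%= red %>" stroke="red" stroke-width="6" fill="none"/>
    <path d="<%= blue %>" stroke="blue" stroke-width="6" fill="none"
          stroke-dasharray="<%= dashes %>"/>
  </svg>
SVG

blue   = 'M 10 10 C 60 10, 40 90, 90 90'
red    = 'M 10 90 C 60 90, 40 10, 90 10'
dashes = '45 30 200' # dash over the crossing, a gap, then the rest

svg = ERB.new(template).result(binding)
File.write('knot.svg', svg)
```

The same idea scales to the full knot: every strand is drawn once completely and then re-drawn with a dash pattern that is "on" exactly where that strand passes over another.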
The Ravello Knots are based on a mosaic found in a church in Ravello, Italy. The city is one of the most beautiful towns along the Amalfi Coast, mostly for its wonderful gardens. I found the knot a long time ago while travelling in Italy, and I already „harvested“ it back then: I created a computer graphic of the original knot and later drew a slight modification of it in my sketchbook.
Since I wanted to continue exploring SVG creations, I was looking for old sketches to refresh, and the knot from Ravello came to my mind. I not only made a nice SVG drawing of the old sketch but took it further and created a totally new variation showing off the
stroke-dasharray technique in a most spectacular way.
You can find all the code in the GitHub repository bxt/Nazareth-Knot.
More than six years ago I concluded a blog post with the bold statement „I will start using subversion for everything... soon“. It turns out I couldn't have been more wrong. What has happened in the meantime?
Firstly, most of the Subversion servers I had been using have since stopped working or I didn't bother maintaining them anymore. SourceForge has gone rogue and Google pulled the plug on its Google Code platform. Instead, it feels like everyone is using GitHub and GitLab. Guess why they have Git in their names? Right. And running your own servers for tiny little private projects just accumulates way too much work over the years.
So does keeping track of repository locations. It's very convenient to have one hidden
.git directory following your code everywhere when dealing with suspended projects. When I move code from machine to machine, from OS to OS, onto portable drives and back, the repository and all its history are always sitting there right with the code. Some of my repositories for small private projects never even hit „the cloud“. Additionally, the Git storage format is backwards compatible, so it's no problem to dive back into ancient repos. As it turns out, Git is really good for archiving.
But the most prevalent reason why I use Git is probably the branching and merging. In 2011 I worked for the first time in a larger team on a project using SVN. Not only did SVN often crash, it also made basic branching and merging really hard. As a result, code could only really be checked into the master branch (remember „trunk“?), with devastating results: crucial steps like CI runs and code review could only happen after the code was already in master and therefore maybe even in production (jeez!), or at least distributed to other developers, breaking their builds. Nowadays I just open a merge request, and it is only ever merged when everything works. And even then it's only in the development branch, undergoing further testing. Git does branching well, and branches are essential for today's developers' workflows.
There's a nice project I'm working on: the Bavarian film festival for students' movies. I like the idea behind this festival, because it enables pupils to show their movies to a broad audience on a cinema screen and win great prizes. So once a year, I touch the code to slightly adjust the design and make a few fixes. Naturally I need a solution that works across the other ~361 days without much maintenance. It's easy to see why SVN, with its servers, detached repositories, incompatibilities and bugs, is not my go-to solution here.
Back in 2010, Julian Schrader, who is now my boss at Sophisticates GmbH, already suggested using Git in the comments of my blog post. And now, after six years, I use Git almost exclusively. So yeah: You win, Git.
If you're a Git power user you might enjoy my script for deleting merged branches.
Sometimes you want to start using Subversion code control with an existing project. This tutorial explains the steps required to create a repository and add the files of the existing project.
Start by creating a repository:
svnadmin create /dir/to/store/repo/repository-name
This will create a directory „repository-name“ containing the database of files and revisions.
Then let's add some internal directories to our repository:
svn mkdir file:///dir/to/store/repo/repository-name/trunk \
  -m "Creating repo directory structure"
Now that you have created the repo directory that should contain all the files, import the existing project files. Start by checking out the empty repo dir into your project dir. This will make your project dir a working copy (that is: create a dir named „.svn“ containing some internal info) but won't change anything else.
cd /existing/project/dir
svn checkout file:///dir/to/store/repo/repository-name/trunk .
Then go on adding (or rather, scheduling to add) all the files:
svn add *
This command will list all the files that will be loaded into the repository. You can always look at the planned changes with svn status.
You probably want to exclude some files such as configuration files or runtime data. You just have to revert the add command again:
svn revert runtimedata # exclude whole dir
svn revert config/my.ini # exclude single file
Then apply the changes to your repository (commit it):
svn commit \
  -m "initial import"
And that's it. Your project is added to the repository without the extra files. That can be crucial if your runtime data has a huge file size. If you're paranoid or just curious, check the repository for success:
svnlook tree /dir/to/store/repo/repository-name
This should show the imported directory tree. Now you can start making changes to your files and commit them as usual.
vim app.cpp
svn commit -m "typo"
svn log -r 1:HEAD # show full revision history
If you ever get stuck, you may use the built-in help:
svn help # list commands
svn help commit # list options
svnlook help tree # works too
I will start using subversion for everything... soon.
Everyone is writing about Twitter now. Everyone thinks they're missing out on things going on at Twitter. Newspapers report about eyewitnesses tweeting things. Twitter's user count and press representation
have been growing rapidly. So what's it all about? Essentially, it's a microblogging service that started in 2006 as a small project of Biz Stone and Evan Williams, who wanted their colleagues to answer the simple question "What are you doing?".
I have been a member of Twitter since March 17, 2007 and have tweeted 439 times since then (actually not too many updates). Anyway, I have noticed a change in how people use Twitter, resulting in a rich variety of usage patterns. Here are some behaviours I have collected over the years:
There are some users who pretty much only answer THE question when it comes to Twitter. You are likely to find mostly tweets like "having breakfast" or "preparing lunch" in their profiles.
It's really funny to follow one of those What-Are-You-Doing guys and then meet them. You won't have anything to say, because you already know (nearly) everything about your fellow tweople.
Also, you really have a log of all the small things you did in life. This might be very interesting some years later.
The tweople who only use Twitter like a chat are a bit incompatible with the others. They have evolved in the SMS times, when Twitter was THE way to text your friends. You might find many senseless tweets like "ok pals I'm off" or "@yomama sure".
Some people don't seem to want to tell everyone what they are doing and don't have too many friends on Twitter, so they don't really have to use Twitter at all. But everyone does, so they do too. This is why they seem to focus on (bad?) jokes, proverbs and short quotes.
The Newsfeedorz
When you have found a stream with only headlines and links, or 6 of 7 tweets starting with "new blog post:", you know you have found a Newsfeedor. They use Twitter only for posting "news". There are famous ones like CNN and rather not too famous ones. And of course many advertisers have found a new channel in Twitter. The very bad thing about them: it's usually not original content, and most of the time it's better available through RSS.
This group is a bit underrepresented. Some of them don't even have a Twitter account. They are reading through someone's profile (subscribing to their stream as RSS) or using one of the services that aggregate Twitter messages, like delicious.com. Or they use Twitter as a real-time opinion-of-the-tweeting-world search engine. I think the Twitter makers had a good reason to change their homepage to a mere search page.
The Retweeters and Answerers
A phenomenon on Twitter is retweeting. If you want to draw your readers' attention to a statement of someone else, you retweet it (you just tweet it again, putting RT @name in front of it). Or you tell everyone your opinion about it (like: opinion (via @name)) or reply directly (@name blaaa). Now some tweople only do this. If you look through their stream you will find dozens of answers, and you don't get what it's all about. This is a true Retweeter/Answerer.
Commenting on the trending topics on Twitter is by definition a very popular behaviour. Some "answers" are really funny, and of course it's a cool way to share personal experiences. It's a kind of very fast global FB. Just don't overdo it. (And don't just write "#iamsinglebecause it's trendy".)
So what's correct?
As always - nothing. So what do I do? Simple: mix. I retweet what I like or want others to know, I answer questions, I keep a log of some things I did, I stay informed about my friends, and I post stuff I put into the cloud.
You can always find new scenarios where you can use a 140-char-messages-posting-page. For example I have started to collect new/strange/biased/funny (German) words. And sometimes I post cryptic messages. For example this one related to the sense of life, a movie, Alice in Wonderland and the time I arrived at school that day.
And because it's always limited to 140 characters, you can display your Twitter status e.g. on your website. I do that through my lifestream. My home server, which also serves as a very neat clock, shows my Twitter status to my family too.
A huge part of a web application is usually the interaction with the SQL database. This is why I want as little work as possible for connecting, escaping values, getting the right tables and so on in PHP. But it should stay simple and allow modular approaches. Therefore I'm using some nested APIs for doing queries easily:
The very first thing I am using is PDO. It can handle many RDBMSs, but most of the time I am using MySQL or SQLite. By using PDO as the API for the following layers I can make sure most of the code will work for many RDBMSs. PDO even simplifies transactions and prepared statements. Here's some sample PHP code using PDO:
$pdo = new PDO('mysql:host='.$host.';dbname='.$db, $user, $password);
$pdo->exec('UPDATE test SET foo="bar" WHERE id=4');
$statement = $pdo->prepare('SELECT * FROM blogeintraege WHERE id=:id');
$statement->bindValue(':id', 3, PDO::PARAM_INT);
$statement->execute();
$data = $statement->fetchAll();
The next layer is a class that holds a MySQL database connection (a PDO object) and offers some simple functions for doing e.g. a simple prepared statement. Instead of binding each value manually, you can throw an array in.
It also includes a cache in case you want to run statements more than once. It can append a prefix to all queried tables and checks dynamically inserted table names for validity to avoid SQL injections and MySQL errors. It is used like this:
$res = $db->sql("SELECT * FROM blogeintraege");
$res = $db->sql(
    "SELECT * FROM #test WHERE id=:id",
    array('id' => $id),
    array('id' => PDO::PARAM_INT),
    array('test' => 'blogeintraege'),
    array('limits' => array(0, $l), 'buffered' => false)
);
For one array element this does not save much typing, but the more values are bound, the more useful it gets. And it is very useful if you already have your values in an array anyway.
Note that nearly everything is optional. The table array can contain more tables; for example you can have an array of tables for different languages, if they are stored in different tables. The bind types don't need to be specified either. You can even leave out everything except the query, as shown in the first line of code. The result will by default be returned as a nice array (the GROUP_CONCAT fields are turned into arrays too), but you can use all the other PDO fetch types.
This layer follows a rather functional approach, so I needed another layer for accessing the central
sql() function in an OOP manner. This should avoid some runtime errors, and you can modify the SQL in a modular fashion.
So I created a wrapper object that holds a reference to the database and constructs the parameters for
sql(). This comes in handy as more and more optional parameters are added.
The PDO Simplifier has a method to build such statement-objects called
sqlO(). This is how the wrapper is used:
$db->sqlO('INSERT INTO blogtaglinks SET ##,type=3')
   ->setSet(array('ID_tag' => $lasttagid, 'ID_entry' => $id))
   ->exec();
$res = $db->sqlO("SELECT * FROM #test WHERE id=:id")
   ->setData(array('id' => $id))
   ->setDataTypes(array('id' => PDO::PARAM_INT))
   ->setTables(array('test' => 'blogeintraege'))
   ->setLimits(0, $l)
   ->setBuffered(false)
   ->exec();
As you can see, it is a little more code, but the code is pretty self-explanatory and now one can build the sets and the other parameters as arrays and then include them easily in the statements.
A bit different: Zend Framework's select()
A next step would be to build queries with a single API. This is a feature implemented by the Zend Framework, where you can build your SQL with some API functions and it will even work across various databases:
$select = $db->select()
    ->from('blogeintraege', array('id', 'Titel'))
    ->where('id < ?', $id)
    ->order('id DESC')
    ->limit(10, 0);
Well doesn't that look nice?