The article Preserving a Cryptography book from 1897 seems to be the first on Davide Aversa.

The book is titled **“Crittografia ossia l’arte di cifrare e decifrare le corrispondenze segrete”**, by **Count Luigi Gioppi of Turkheim**.

Well, I don’t know if this book is hard to find. In any case, I decided to take a photo of every page to share and preserve this book. The book is very old and some pages are ruined, so I apologize if the quality of the photos is not optimal. I’ve done my best not to destroy the book in the process. :)

I will release the photos one chapter at a time. Keep this page bookmarked.

Some time ago I found, at my grandmother’s house, an old cryptography book dating back to 1897. Why a 110-year-old book on cryptography was at my grandmother’s house will remain a mystery. Anyway.

- Cover + Chapter 1
- Chapter 2


The article NaNoWriMo 2017 in Stats seems to be the first on Davide Aversa.

I think this is the proper way to do it: I need to talk about the novel. The novel is, of course, in Italian, and it is unfinished. With 52k words I barely reached the beginning of the third act, more or less. Many things need to be rewritten, characters disappeared into thin air as soon as I discovered that I didn’t need them… stuff like that. The usual way to do a first draft.

The story is about a spaceship collapsing into a gravitational singularity near Triton. The ship’s engineer survives and lands on a planet that seems taken from a cheap, uninspired fantasy book. The main conflict is about the character trying to come back to Earth and dealing with his “technology” in the setting of a medieval fantasy world.

**Day with most words:** November 26th with 3,213 words.

**Day with fewest words:** November 28th with 155 words.

**Global Average:** 1,739 words per day.

**Day I reached 50k:** November 27th.

**Best paragraph (so far):** (it is in Italian but it translates roughly like this)

"You are a powerful magician! You need to help us! I've seen what you can do with your scepter!"

"I am not a magician! I told you this a lot of times!"

"So… what are you?"

"I am an electrical engineer, for God's sake!"

"Is that a kind of magician?"

"Kinda…"

It was a fun experience after all. More importantly, I proved to myself that I can do it: I can survive 50k words in a month without any major issue. And if I can do this in November, I can do it in any other month.


The article Procedural Calendar Generation – Lunar Phases seems to be the first on Davide Aversa.

That said, here we go with the next part: **lunar phases**.

Lunar phases are incredibly important in a calendar. So important, in fact, that many early human calendars are lunar or lunisolar calendars. Of course, this is true if and only if the planet is lucky enough to have a big moon like ours.

In any case, the moon is how we define months. A calendar generator must take the moon’s period into account in order to have months (we could also support different criteria for month generation, but that is a more advanced feature).

Once we introduce a prominent moon, then, we are automatically adding lunar phases into the mix. In our world, we have so much lore attached to the concept of lunar phases that it is hard to imagine another planet with a major satellite ignoring the feature.

So we need to compute the lunar phases and put them into our calendar.

Many people think that lunar phases are caused by the shadow of the Earth on the Moon. This is wrong. Lunar phases are just caused by the Sun lighting the Moon from a different angle every night.

A lunar phase is *how much of the satellite’s lit surface we can see from the planet.* Because the moon takes a month to complete a revolution around the planet, during the month we see different “percentages” of the satellite under the light. This is shown in the picture above.

For lunar phases we need to focus on the planet–satellite system. We assume that satellite, planet, and star all orbit on the same plane. This is false (otherwise we would have a solar eclipse every month), but it is enough for lunar phase computation.

Then, we assume that light comes from a fixed, parallel direction. This is also not true (after all, the planet is moving during that month), but it is still a good approximation.

Finally, we have everything we need:

- We call *lunar phase* the percentage of the star-lit side of the satellite that we can see from the planet. As you can see from the image above, this is the angle between the line orthogonal to the line connecting the planet and the moon (the light blue line) and the line orthogonal to the line connecting the star to the moon (the orange line).
- The orange line is fixed in our approximation (which holds if the planet’s revolution is much slower than the moon’s). The blue line, instead, depends on the *true anomaly of the satellite around the planet*, $\nu$. Not only that: the difference between the orange and the blue line is exactly the true anomaly!
- What we want now is the ratio between the true anomaly and the 180° angle: $p = \nu / 180°$. This is the “progress” of the moon phase. It goes from 0 (new moon), to 0.5 (first quarter), to 1 (full moon), and back to zero.
- Remember, we know how to compute the true anomaly! It is the same computation we used for the orbit of the planet!

This is a very simplified model but it is enough to add the concept of moon phases to our calendar.
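As a minimal sketch of this simplified model (in Python, with a function name and degree-based interface of my own choosing), the phase “progress” can be computed directly from the moon’s true anomaly:

```python
def moon_phase_progress(true_anomaly_deg: float) -> float:
    """Phase "progress" from the moon's true anomaly around the planet:
    0 = new moon, 0.5 = first/last quarter, 1 = full moon."""
    nu = true_anomaly_deg % 360.0
    # The lit fraction we can see grows with the angle up to 180 degrees
    # (full moon), then mirrors back down to 0 (new moon again).
    return nu / 180.0 if nu <= 180.0 else (360.0 - nu) / 180.0
```

With a known lunar month length, evaluating this function on the true anomaly of each day gives a phase entry for every day of the calendar.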

Now we have months, moon phases, seasons, and the general properties of the calendar. The next step is to start adding some cultural aspects to it.


The article Seasons Generation from Orbital Parameters seems to be the first on Davide Aversa.

In this part, instead, we will tackle a fascinating consequence of the cosmic dance of our planet around its sun: seasons. Seasons are a strange beast because their behavior depends on a huge number of factors. We are used to our four seasons, with mild springs and autumns, hot summers, and cold winters. But these four seasons are just the consequence of our planet’s ecosystem, its atmosphere, and its peculiar **axial tilt**; if the orbit is particularly eccentric, the varying distance from the sun during the year can be a strong modifier too! In multi-star systems we can have more than 4 seasons, and on planets with strange mechanics we may have no seasons at all (or better, the “season” depends on where you are on the planet).

But I want to keep things simple for now. We can safely assume that seasons are just a way to represent day duration in the northern hemisphere. We say that “spring” and “autumn” *start* on the **equinoxes**, that is, when the durations of night and day are the same, and continue until the **solstices**, that is, the “longest day” (summer starts here) or the “longest night” (winter starts here).

Note that we are still assuming that there is a “day” and a “night”, that the planet is not tidally locked to its star. Otherwise, it is hard to associate a “season” with any kind of climate change. Do you see how many assumptions the concept of “season” requires?

Anyway, we call our definition of season the **astronomical season**. The advantage of this definition is that seasons are marked by deterministic properties of the position of the planet along its orbit and, thus, we can compute them. And luckily for us, we already have the orbital information. We only miss one more parameter: the axial tilt.

Before we go further, let’s explore this remaining parameter.

As we (hopefully) know from school, Earth’s seasons are caused by the inclination of the Earth’s rotational axis. This value is called the **axial tilt**, and it is defined as the angle between the **orbital plane** and the **equatorial plane**.

The **orbital plane** (also known as the **ecliptic plane**) is the plane in which we find the orbit. This is very easy in our case: because we consider the Sun the origin of our system, and because we are taking into account only one planet, the orbital plane is simply the *xy* plane.

The **equatorial plane**, instead, is defined as the plane cutting the planet in half through the equator. The equatorial plane’s normal is exactly the rotational axis, so we can define it as some vector $\vec{r} = (a, b, c)$.

Those planes are a bit hard to visualize in a 2D image, but you can do a better job by placing a piece of paper on your desk and tilting it while keeping one corner on the desk. The desk is the orbital (or ecliptic) plane. The piece of paper is the equatorial plane cutting in half an imaginary point-sized planet orbiting on your desk. You will notice that there are “two angles” along which you can move the piece of paper without moving the corner.

We can easily compute the axial tilt $\varepsilon$ as the angle between the rotation axis $\vec{r}$ and the orbital plane’s normal $\hat{n}$:

$$\cos\varepsilon = \frac{\vec{r} \cdot \hat{n}}{|\vec{r}|\,|\hat{n}|}$$

In our case the orbital plane is the *xy* plane, so $\hat{n} = (0, 0, 1)$ and:

$$\varepsilon = \arccos\left(\frac{c}{\sqrt{a^2 + b^2 + c^2}}\right)$$
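To make this concrete, here is a small Python sketch (the function name is mine); it just applies the arccosine above with the orbital plane’s normal fixed to the z-axis:

```python
import math

def axial_tilt_deg(a: float, b: float, c: float) -> float:
    """Axial tilt: the angle between the rotation axis (a, b, c)
    and the orbital plane's normal (0, 0, 1), in degrees."""
    return math.degrees(math.acos(c / math.sqrt(a * a + b * b + c * c)))
```

For instance, an axis of `(0, 0, 1)` gives a tilt of 0°, while `(1, 0, 1)` gives 45°.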

The axial tilt alone is not enough to compute the seasons. The axial tilt only tells us how different one season is from the others. Intuitively, we can distinguish three cases.

If the axial tilt is very small, the differences between the seasons are very small. If it is exactly zero, there are no seasons at all! All the days are the same, and we can assume the climate will remain the same every day of the year.

The same is true if the axial tilt is close to 180°. Yes, the planet will spin “backward”, but the rotation axis is still nearly perpendicular to the orbital plane. This is the case of Venus, with its 177° axial tilt.

Earth’s axial tilt is, on average, 23°. If a planet’s tilt is around this value, we can expect the same kind of differences between seasons that we see here on Earth.

If, instead, the axial tilt is close to 90°, seasons are **extreme** (if it is possible to talk of “seasons” at all). During the summer solstice the north pole faces the sun directly. The planet’s rotation will not affect the “day”, and northern locations will be constantly in scorching daylight. During the winter solstice the opposite is true, and northern locations will spend their days in total freezing darkness. During the fall and spring equinoxes, instead, the day depends only on the rotation speed and the situation is “normal”.

And obviously we have all the transitions between these states.

In our solar system, this is the case of **Uranus**. If you are interested in a longer, detailed description of Uranus’s extreme season configuration, this is a beautiful article.

As we have said, the axial tilt only describes the “intensity” of seasonal variations. It says **nothing** about when the seasons occur, when there is an equinox or a solstice. For this, we need the other two parameters in the rotation axis vector: *a* and *b*.

We can do better: let’s rewrite this vector as $\vec{r} = (\sin\varepsilon \sin\delta,\ -\sin\varepsilon \cos\delta,\ \cos\varepsilon)$, where $\varepsilon$ is the axial tilt and $\delta$ is a new angle.

In this way we made this vector a unit vector and, moreover, we made the relation between the axial tilt and the rotation axis of the planet explicit. And we reduced the number of parameters from 3 to 2.

But what is $\delta$? To understand it, we need to look at the intersection between the orbital and the equatorial plane (we assume both planes pass through the origin for simplicity; the same applies to the equatorial plane passing through the planet’s position, of course). Points on this intersection satisfy $z = 0$ and $\vec{r} \cdot (x, y, z) = 0$, that is:

$$\sin\varepsilon \sin\delta \; x - \sin\varepsilon \cos\delta \; y = 0$$

If the axial tilt is not zero, we can remove $\sin\varepsilon$ everywhere:

$$\sin\delta \; x - \cos\delta \; y = 0$$

that is

$$y = x \tan\delta$$

So, if we look at the line obtained by this intersection, $\delta$ is the angle of this line with respect to the orbit’s major axis.

Why do we need this? Because to compute the equinoxes, we just need to find the intersections of this line with the orbit. To find the solstices, rotate the line by 90° and look for the intersections with the orbit again.
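A possible sketch of this step in Python (assuming a Keplerian orbit, time measured in days from the periastron, and the intersection line at an angle δ from the major axis, so that the equinoxes fall at true anomaly δ and δ + 180° and the solstices at δ ± 90°; function names are mine):

```python
import math

def true_anomaly_to_time(nu: float, e: float, period: float) -> float:
    """Days since periastron for true anomaly nu (radians), via Kepler."""
    # Eccentric anomaly from the true anomaly.
    ecc_anomaly = 2.0 * math.atan2(math.sqrt(1.0 - e) * math.sin(nu / 2.0),
                                   math.sqrt(1.0 + e) * math.cos(nu / 2.0))
    mean_anomaly = ecc_anomaly - e * math.sin(ecc_anomaly)  # Kepler's equation
    return (mean_anomaly % (2.0 * math.pi)) / (2.0 * math.pi) * period

def season_markers(delta_deg: float, e: float, period: float) -> dict:
    """Days (since periastron) of the two equinoxes and two solstices."""
    offsets = {"equinox A": 0.0, "solstice A": 90.0,
               "equinox B": 180.0, "solstice B": 270.0}
    return {name: true_anomaly_to_time(math.radians(delta_deg + off), e, period)
            for name, off in offsets.items()}
```

On a circular orbit (`e = 0`) the four markers split the year into four equal parts; the more eccentric the orbit, the more unequal the seasons become, as discussed below.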

What we have done is easy: given some physical characteristics of our orbit, we have inferred some astronomical events that can influence the seasons of the planet. In particular, the orbital parameters plus the two new parameters, the axial tilt $\varepsilon$ and the angle $\delta$, have an impact on:

- How strong seasons are. Note however that this depends on the location on the planet too!
- How long a season is.
- When a certain season begins in the year (assuming, as we are, that “day one” of the calendar is at the periastron).
- How different seasons are in terms of duration: very eccentric orbits can have a season much shorter than another.
- And more.

Later, we will see how we can use this information to derive some meaningful names for the calendar (e.g., saying that a certain month is the “hot month” because it is in the middle of the summer), or events and festivities for our population. But this is a story for another time.


The article Not every classification error is the same seems to be the first on Davide Aversa.

But let’s start from the beginning. Imagine a simple binary classifier: it takes some input $x$ and returns a Boolean value telling us whether $x$ belongs to a certain class $C$ or not. When we pass a number of elements through the algorithm, we can identify only 4 possible outcomes:

- $T_P$ – **True Positive** – The algorithm correctly classifies $x$ as a member of $C$.
- $T_N$ – **True Negative** – The algorithm correctly classifies $x$ as **not** a member of $C$.
- $F_P$ – **False Positive** – The algorithm mistakenly classifies $x$ as a member of $C$. That is, it says that $x$ belongs to $C$ while this is not true.
- $F_N$ – **False Negative** – The algorithm mistakenly classifies $x$ as **not** a member of $C$. That is, it says that $x$ does not belong to $C$ while it actually does.

$T_P$ and $T_N$ are the “correct” answers. They represent the cases when the algorithm assigns the element to the right class. Intuitively, we want these two outcomes to represent the majority of our classification outcomes.

On the other hand, $F_P$ and $F_N$ are the “wrong” answers (also called *Type I* and *Type II* errors). With the same reasoning, we want the number of these outcomes to be as close to zero as possible.

To represent our first naïve evaluation of the algorithm, we can produce a “score” function encoding the above reasoning:

$$S = \frac{T_P + T_N}{T_P + T_N + F_P + F_N}$$

This score function is 1 when both $F_P$ and $F_N$ are zero, signaling that our classification is perfect (at least on our testing set). It also goes to zero when $F_P + F_N$ represents the full set of outcomes, signaling that our classification algorithm classifies everything wrong.

*Note 1: a binary classification algorithm that gets everything wrong is actually an amazing classifier! You just have to consider the opposite of what it says and you get a perfect classifier!*

*Note 2: note that, by definition, $T_P + T_N + F_P + F_N$ must be equal to the full set of outcomes. That’s also the reason we can ignore one of the four terms: it is fixed by the other three.*
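A minimal Python sketch of the four outcomes and of this naïve score (which, written this way, is plain accuracy; the function names are my own):

```python
def outcome_counts(y_true, y_pred):
    """Count the four possible outcomes of a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return tp, tn, fp, fn

def naive_score(tp, tn, fp, fn):
    """1 when there are no errors, 0 when everything is misclassified."""
    return (tp + tn) / (tp + tn + fp + fn)
```

Note how `naive_score` treats `fp` and `fn` identically: swapping them leaves the score unchanged, which is exactly the problem discussed next.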

Now, you may be satisfied with this result. It seems a good score. But in reality, it is an awful score for many applications. Moreover, using it for evaluation (and thus for tuning and training) will produce an algorithm that is potentially dangerous.

I’ll let you think “why” for a couple of minutes.

…

Look at the errors. The above score function considers $F_P$ and $F_N$ equally important. It doesn’t matter if an algorithm commits only false positives or only false negatives: for this score, they perform the same. This is often bad. To convince yourself, here are a couple of examples:

- Imagine you are writing a machine-learning-powered algorithm for bridge and infrastructure maintenance. The goal is to optimize government resources and increase people’s safety by identifying “collapsing” bridges in advance, so that you can send a specialized team to the bridge.

  To do that, the algorithm takes as input a bunch of sensor readings (such as vibration frequency, camera images of connections and pillars, and so on) and produces a binary answer to the question: is the bridge collapsing or not?

  Consider now the effect of false positives and false negatives. A **false positive** means that the algorithm classifies as “collapsing” a bridge that is actually fine. For this reason, you send a squad to the location, and the squad discovers that the bridge is fine. At worst, you have wasted a day of work for your team.

  Consider now a **false negative**. This means that the algorithm classifies a collapsing bridge as “safe”. In this case, you do nothing. But the bridge is collapsing, so some weeks later it actually collapses. The result is a lot of damage and probably deaths.

- Imagine now you are writing a medical application to diagnose a deadly infectious disease (I don’t know, Ebola or some other plague). The algorithm takes as input a bunch of medical exams and returns the binary answer to the question: “is this person infected?”.

  Consider again the effect of false positives and false negatives. A **false positive** means that the algorithm classifies a healthy person as “infected”. You start more accurate exams, you put the person in quarantine, and you continue with your protocol. After a week of additional exams, you discover that this person is actually fine. The consequences? You probably gave them a rough week but, other than that, it is a good ending.

  Consider now a **false negative**. Your algorithm classifies an infected person as “healthy”. So you do nothing. The person goes home, infecting other people and reducing their own chance of survival because of the late intervention. I think you will agree that this is a much worse scenario.

There is, in fact, a **precautionary principle** that must be taken into account. In the above examples a false positive is always better than a false negative, but this is not the point. Whether a false negative or a false positive is better depends on the scenario and on the question we are asking the algorithm (try to reverse the class in example 1 from “the class of collapsing bridges” to “the class of solid bridges”).

The point is that **we should prefer the error that leads to reversible actions**. The famous **better safe than sorry**. In the case of the bridge, it is indubitably better to check a safe bridge than to ignore a collapsing one. In the virus example, it is indubitably better to be more scrupulous with a healthy patient we think may be infected than to ignore a really infected one.

In my opinion, there are very few scenarios where both kinds of classification errors are equally bad. In real life, there is always one outcome that is marginally better than the other.

So, how do we do it? Stop considering the F1 score good enough. Start thinking about the consequences of decisions based on your algorithm’s outcome. A better approach is to use the $F_\beta$ score:

$$F_\beta = (1 + \beta^2) \, \frac{\mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}$$

With $\beta < 1$ you are increasing the impact of false positives. With $\beta > 1$ you are increasing the impact of false negatives. Common choices are $\beta = 0.5$ and $\beta = 2$ but, in the end, it is your call.
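As an illustration, the $F_\beta$ score can be computed directly from the confusion counts; the formula is standard, while the tiny helper below is my own sketch:

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """F-beta score from confusion counts. beta > 1 weighs recall more
    (false negatives hurt more); beta < 1 weighs precision more."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)
```

For the bridge example, where a false negative is catastrophic, one would pick `beta > 1`; a classifier with few false negatives then scores visibly better than one with the same total number of errors skewed toward false negatives.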


The article Cuphead is not “hard” seems to be the first on Davide Aversa.


Because of the indubitably charming look and feel of the game, Cuphead attracted players that would never have played this type of game. Side-scrolling shooters and *shoot ’em ups* were quite common genres in the past. In the last 10 or more years, instead, the number of very successful side-scrolling shooters has been close to zero. This is not a common game category anymore.

Then Cuphead came along with a killer feature: amazing graphics. People who played other games, casual gamers, or even people who do not play at all are getting hooked. They want to experience such a charming game. And, to their surprise, they discover that side-scrolling shooters are a quite challenging kind of game. Even if Cuphead is a totally average side-scrolling shooter in terms of difficulty (probably leaning on the easier side), people are just surprised by the intrinsic difficulty of a genre they are not used to.

For this reason, **Cuphead is not hard; it is the genre it belongs to that is more challenging than the average modern game.**

Cuphead’s tutorial is a good tutorial. It is the kind of tutorial the industry made 20 years ago: it teaches you the game with the game itself. Therefore, in the first real Cuphead level you are thrown into the action in a relatively safe environment, and you need to figure out how to use the skills you have. In the first level you need to do it all: jump, duck, dash and, obviously, shoot.

But this is not what we are used to anymore. We are spoiled by the *“press X to jump”* tutorial type: annoying tutorials that hold your hand for at least an hour before you are allowed to fail.

Moreover, the average difficulty of games is going down. Some challenging aspects are washed away by better game design. This is good, because the removed challenges are not the “fun ones”. But another big chunk of challenges is removed just to appeal to a wider audience. This is not always a bad thing: I also enjoy the kind of “interactive experience” that can be played in a continuous stream, like an interactive film or book.

But the bigger the market share for this kind of streamlined game gets, the more the games that do not follow this trend stand out as “hard”.

For this reason, **Cuphead is not hard; it is just a more traditional game.**

The only aspect that Cuphead and Dark Souls have in common is “learn the pattern of the boss and *git gud*”. But this is not enough to make Cuphead a “souls game”. Learning the boss patterns in order to defeat it is probably a basic mechanic of 90% of games.

Dark Souls is probably one of my favorite games ever, and it is definitely one of those rare cases in which a single game can define a new genre by itself. But the “Dark Souls” genre is not defined by its hardness. Sure, this is one of the things everybody likes to repeat and, sure, Dark Souls is a game that mastered “learn the pattern of the boss and *git gud*”, but that is not enough. A “Dark Souls”-like game should also include exploration and **hard punishment for frequent failures**, two things that Cuphead never even thought about.

For this reason, **Cuphead is not hard, but it is not “Dark Souls” either.**


The article A Dwarf Fortress calendar in PureScript + Halogen seems to be the first on Davide Aversa.

To test PureScript I decided to implement a very simple project: a page showing today’s date according to the calendar used in Dwarf Fortress. It is easy enough to be tackled in a week without knowing anything about PureScript and Halogen: you take today’s date, you apply some math, and you print the result on an HTML widget. At the same time, I think it is complex enough to get a grasp of PureScript’s potential (at least, in the allocated time).

You can find the result here. (Github Repository) Now, we can go on.

This is the least interesting part, but it should be covered. The algorithm is outlined in the following steps:

- Take today’s date.
- Find the number of days since the first day of spring (21st March) of 2006 (Dwarf Fortress’s release year).
- Scale the day duration to map onto a Dwarf Fortress year.
- Use this number to compute the date in the new calendar.

Point 3 is required because a Dwarf Fortress year is composed of 12 months of exactly 28 days; in total, a Dwarf Fortress year is 336 days. This would quickly put the seasons out of phase with our human seasons, therefore I scaled a Dwarf Fortress day to be 365/336 ≈ 8% longer than our day in order to keep the years in sync.

For a similar reason, I chose to make the new year start on 21st March, because in the Dwarf Fortress world the year starts in spring (and I live in the northern hemisphere, so…).
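Before moving on to the PureScript version, the four steps can be sketched in a few lines of Python (a sketch of my own, using the month names of the Dwarf Fortress calendar and the scaling from point 3):

```python
from datetime import date

# Dwarf Fortress month names, from Granite (spring) to Obsidian (winter).
MONTHS = ["Granite", "Slate", "Felsite", "Hematite", "Malachite", "Galena",
          "Limestone", "Sandstone", "Timber", "Moonstone", "Opal", "Obsidian"]

def armok_date(today: date) -> str:
    epoch = date(2006, 3, 21)          # first day of spring, release year
    elapsed = (today - epoch).days
    scaled = elapsed * 336 // 365      # stretch DF days to keep years in sync
    year, day_of_year = divmod(scaled, 336)
    month, day = divmod(day_of_year, 28)
    return f"{day + 1} {MONTHS[month]} {year}"
```

On the epoch itself this yields "1 Granite 0", the first day of year zero.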

PureScript can be installed easily with `npm`:

```
npm install -g purescript
```

This is not enough. You also need `pulp`, the PureScript build tool, and `bower`, the standard front-end package manager:

```
npm install -g pulp bower
```

To create an empty project, create a folder and use the command `pulp init`.

PureScript is Haskell for the web. It is very similar to Haskell, but it is not Haskell: it has several small but important differences. Some of them are improvements, due to the fact that PureScript can simply drop many “wrong” legacy decisions of Haskell. Some of them are different design choices, given that PureScript must output JavaScript code that is as clean as possible. Others are just missing features.

As a Haskell user, I quickly felt comfortable with the language. However, you need to pay attention to some important differences between Haskell and PureScript:

- First, **PureScript is not lazy.** It is strictly evaluated, so that it is possible to avoid a complex runtime in the JS output.
- PureScript requires an explicit `forall` in polymorphic type/function declarations. For instance, `length :: [a] -> Int` is valid in Haskell. However, in PureScript this must be declared as `length :: forall a. Array a -> Int`.
- There is no `derive` functionality. **(UPDATE 2nd Oct)** I dug a bit more into this topic. PureScript does not support the “classic” Haskell `deriving` but a `StandaloneDeriving`-like functionality (in the form of `derive instance` for every type class we want to derive). This can be enabled in Haskell too using language extensions. This is more verbose, but it is a good thing. More info here.
- Function composition is represented by `<<<` instead of `.`. This was a big source of confusion for me because using the `.` is not a syntax error (but it tries to do something different from composition). Why is that? Because composition can be done right-to-left or left-to-right, and the `.` operator would be ambiguous about which kind of composition we are applying.

Another important difference is that Haskell’s `IO` monad is replaced by the `Eff` monad. This has the same function as `IO`, but it can be made more granular. That is, while `IO` can be used for *any* non-pure I/O operation, the `Eff` monad can be specialized for different kinds of side effects: console interaction, logging, databases, random values, and so on.

For instance, a type `Eff (fs :: FS, trace :: Trace, process :: Process | e)` can be found on functions that access the file system, trace something on the console, and get data from the current process (in Node.js), while `Eff (fs :: FS)` works in functions that **only** access the file system.

This makes everything more confusing and complicated, especially for people who already have problems with the concept of the `IO` monad.

There are more differences, of course, but these are the main ones.

After I became confident with these changes, my Haskell engine started producing PureScript code, and writing the algorithm went smoothly.

Writing the user interface has been the hardest part. Even if my interface consists only of a computed text string, the lack of documentation for UI tools made everything more complex. I started with Flare. It is an amazingly powerful library, but very lacking in documentation and community support. So, after struggling to find a suitable example of a widget without input controls, I gave up.

Instead, I chose a more “stable” library: Halogen. This is a React.js-like UI library for PureScript. Its main advantage is that it is old enough to have some questions answered on StackOverflow and more solid documentation. It is not perfect but, at least, it covers the building of a simple project from scratch with detailed, step-by-step examples.

I fought with some monadic aspects of Halogen but, in the end, I came up with 36 lines of code for the display widget. This seems to me a reasonable amount of lines for such a simple component.

```purescript
type State = ArmokDate

data Query a = Regenerate a
data Input a = Unit

ui :: forall eff. H.Component HH.HTML Query Unit Void (Aff (now :: NOW | eff))
ui = H.lifecycleComponent
  { initialState: const initialState
  , render
  , eval
  , receiver: const Nothing
  , initializer: Just (H.action Regenerate)
  , finalizer: Just (H.action Regenerate)
  }
  where

  initialState :: State
  initialState = { day: 1, month: Granite, year: 0 }

  render :: State -> H.ComponentHTML Query
  render state =
    let string = showArmokDate state
    in HH.div_ $ [ HH.h1_ [ HH.text string ] ]

  eval :: Query ~> H.ComponentDSL State Query Void (Aff (now :: NOW | eff))
  eval = case _ of
    Regenerate a -> do
      date <- H.liftEff nowDate
      H.put (convertLocal date)
      pure a
```

Finally, we can compile the project to JavaScript with the command:

```
pulp build -O --to dist/app.js
```

**The result is an uncompressed 341.94KB file (125.87KB after minification).** This is not too bad, considering that we are embedding the PureScript runtime **(*)**, a couple of libraries (for datetime manipulation) and, in practice, a *React.js* clone in PureScript. For this project it is too much, of course; what matters is that the size overhead is acceptable and will remain stable.

The output JavaScript is verbose and difficult to parse for a human being (even if it uses descriptive names). Therefore, I will discard the option “if I want to stop using PureScript I can simply start working on the app.js code”. This approach works in TypeScript, not in PureScript.

Other than that, the output `app.js` file is completely self-contained. It does not require any specific procedure to make it work; you just have to include it.

**(*) UPDATE 2nd Oct:** People on Twitter told me that PureScript does not produce a “runtime”. That’s true; it is my fault. I used “runtime” very loosely there. By “runtime”, I referred to a kind of “PureScript standard library overhead”: in the generated code you find a lot of code from the `Prelude` or `Control.Monad.*` modules. This is not technically a runtime, and it is not a “bad thing” by itself. But it is something “more” with respect to a plain JavaScript script with equivalent effects.

**PureScript is a very promising language for front-end web development.** It is a perfect choice if you want 90% of Haskell’s capabilities but with a stable compiler (GHCJS, the official Haskell-to-JS compiler, is still very unstable). In some parts, PureScript is even better than Haskell itself. But watch out for the differences! I spent more time than I’d like to admit trying to understand why something that I used in Haskell didn’t work in PureScript.

However, I cannot say I am in love with PureScript yet. **The main problem for me is the documentation.** This is a problem for Haskell too. Documentation needs more examples, more small snippets of code, more explanations. For this project, I found myself mentally solving algebraic type equations to guess which function I needed. While the fact that I can do this is one of the biggest advantages of the Haskell/PureScript type systems, this doesn’t mean that we can neglect detailed documentation! I can figure out the right function by combining types in my mind, but I would be much happier if it were explained in the documentation somewhere!

Moreover, because the language is so young, it is very hard to find answers to previous developers’ questions. So, **if you find a hard problem with some library, you are mostly on your own**. **(*)**

PureScript is an extremely promising programming language but, for now, the ecosystem is too poor. I am sure it will improve as fast as it has improved in the last few years. For now, however, only people with an already solid base in functional programming will really have the strength to use this language in a serious production-ready project.

I will follow the project though. We need this kind of stuff in the front-end world.

**(*) UPDATE 2nd Oct:** This was a weekend project/challenge. I avoided asking questions on StackOverflow or in chats on purpose. First, I had no time to wait for an answer, and I was never really blocked on one issue for long. Second, and more important, I think that how many problems you can solve without actively interacting with the community is a good indicator of the state of a project’s documentation.

That said, I feel I was partial here. It is true that the documentation can be greatly improved, but obviously, when documentation fails, there is the community. The PureScript community is small but, judging from the amount of feedback on this article, I can assure you it is very dedicated!

One of the problems, though, is that **there is no link to any community group/forum/chat on the PureScript homepage!** (They are listed at the end of the README on GitHub.) For this reason, and to atone for my sins, I will link them here too!

- There is the mandatory subreddit: /r/purescript.
- Then, there is a channel in the Functional Programming Slack group.
- Finally, there is a Google group. However, it does not seem to be very active.

The article A Dwarf Fortress calendar in PureScript + Halogen appeared first on Davide Aversa.

The article Crash course on basic ellipse geometry appeared first on Davide Aversa.

Because I started a small series about astronomical algorithms and the magic of math in space, I think we need to cover an important prerequisite. In the series, I will talk a lot about ellipses (duh): I will move from the **semi-major axis**, to the **periapsis**, to the **eccentricity**, to the ellipse’s **center** and the ellipse’s **foci**. I am concerned that things can get more complicated than expected if readers do not know the main geometric properties of the ellipse. For this reason, I put here this *vade mecum* on ellipse geometry: a summary with all the basic points and lengths, and a place that I can link whenever I need to refresh a definition.

This only scratches the surface of ellipse properties, but I think it is enough for what we need now. So let’s start from the beginning.


We all probably know the classic ellipse definition: an ellipse is the set of points for each of which the sum of its distances to two **foci** is a fixed number. If we call these two special points $F_1$ and $F_2$, then the ellipse is the set of points $P$ such that

$$|PF_1| + |PF_2| = 2a$$

In the formula, $a$ is the **semi-major axis**.

However, you probably know better the analytic formula of an ellipse in the Cartesian plane:

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$

where $a$ is, again, the semi-major axis and $b$ is the **semi-minor axis**.

The ellipse center is usually denoted by $C$. It is the midpoint between the two foci $F_1$ and $F_2$. As a consequence, one formula involving the center is the obvious

$$C = \frac{F_1 + F_2}{2}$$

As far as orbital mechanics is concerned, we could choose the ellipse’s center as the origin of the reference frame. In general, however, it is preferred to put the origin in the focus occupied by the main body.

As we have seen before, the foci are two points along the ellipse’s **major axis** such that the sum of the distances between any point on the ellipse and the two foci is equal to the major axis itself. The foci are two of the most important points of the ellipse: they are the ellipse’s equivalent of the circle’s center, they are at the heart of many of the ellipse’s properties and, most importantly, in astromechanics one of the foci is the place where we find the celestial body around which the other body orbits.

In case we only have $a$ and $b$ as defining parameters, we can find the distance $c$ between one focus and the center with:

$$c = \sqrt{a^2 - b^2}$$

In this formula, $c$ is also called the **linear eccentricity**.

As I said before, there are many properties involving the ellipse’s foci. One of my favorites is the fact that any ray passing through one focus and “bouncing” off the ellipse ends up passing through the other focus.

These two points, the **periapsis** and the **apoapsis**, represent respectively the closest and the farthest point from the focus $F_1$. Obviously, we can choose $F_2$ as the reference focus and get completely symmetric results. Periapsis and apoapsis are very important in astromechanics, where they take different names depending on the center of mass of the orbiting system. For instance, around the Sun we call them “perihelion” and “aphelion”; around the Earth, “perigee” and “apogee”.

As you can imagine, they are very important when dealing with orbits and planets because they are much easier to measure empirically. Moreover, they are enough to compute all the other important points and measures. In fact, periapsis and apoapsis are probably at the center of one of my favorite sets of relations ever. Let’s call $r_p$ the distance between the focus and the periapsis, and $r_a$ the distance between the focus and the apoapsis.

If we take the **arithmetic mean** of periapsis and apoapsis, we get the **semi-major axis** $a$:

$$a = \frac{r_p + r_a}{2}$$

If we take the **geometric mean** of periapsis and apoapsis, we get the **semi-minor axis** $b$:

$$b = \sqrt{r_p\, r_a}$$

If we take the **harmonic mean** of periapsis and apoapsis, we get the **semi-latus rectum** $\ell$:

$$\ell = \frac{2\, r_p\, r_a}{r_p + r_a}$$

I find it amazing how three different kinds of averages of the same two values are connected to three such important measures.
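These three relations are easy to verify numerically. Below is a minimal sketch in Python (the periapsis and apoapsis distances are arbitrary example values of mine):

```python
import math

# Arbitrary example distances (same unit, e.g. AU).
r_p, r_a = 2.0, 8.0   # periapsis and apoapsis distances from the focus

a = (r_p + r_a) / 2              # arithmetic mean -> semi-major axis
b = math.sqrt(r_p * r_a)         # geometric mean  -> semi-minor axis
l = 2 * r_p * r_a / (r_p + r_a)  # harmonic mean   -> semi-latus rectum

# Cross-check with the standard ellipse identity l = b^2 / a.
print(a, b, l)  # 5.0 4.0 3.2
```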

Finally, because we can express $r_p$ and $r_a$ as functions of $a$ and $e$ (namely $r_p = a(1 - e)$ and $r_a = a(1 + e)$), we can find a very useful formula for the ellipse’s **eccentricity**:

$$e = \frac{r_a - r_p}{r_a + r_p}$$

The semi-major axis $a$ is half of the **major axis**, the line passing through the two foci from one end of the ellipse to the other. It is one of the main parameters of the ellipse, and it is best known as the parameter dividing $x$ in the analytic formula.

The major axis $2a$ is also the constant to which the distances from the two foci sum up in the ellipse definition.

Take the ellipse’s center and trace a line through it perpendicular to the major axis. The length of the segment enclosed by the ellipse is the **minor axis**. Take half its value and you get the semi-minor axis $b$. This value is best known as the parameter dividing $y$ in the analytic formula.

The linear eccentricity $c$ is the distance between the center and one of the foci. With a bit of algebraic juggling, it is easy to see that this value can be derived from $a$ and $b$:

$$c = \sqrt{a^2 - b^2}$$

The ratio of $c$ and $a$ defines the ellipse’s **eccentricity** $e$. But we will see that there are easier ways to compute $e$ than passing through $c$.

Take the minor axis and slide it until it passes through one of the foci. The segment enclosed by the ellipse is called the **latus rectum**. Take half of it and you get the semi-latus rectum $\ell$. This value can be computed from $a$ and $b$:

$$\ell = \frac{b^2}{a}$$

It is probably the least interesting of these measures, but it may come in handy, so it is important to know.

Finally, the **eccentricity** $e$ is another extremely important parameter. Loosely speaking, the eccentricity is a value between 0 and 1 (with 1 excluded) that represents how much the ellipse is “stretched”. More formally, it measures how much the ellipse deviates from being a circle.

It is formally defined as the ratio between the **linear eccentricity** and the **semi-major axis**:

$$e = \frac{c}{a}$$

The reason this value is so important is that it encodes the “asymmetry” of the ellipse in a dense and beautiful number between 0 and 1. As a result, it allows us to switch easily between the elements of many pairs of measures: we can use $e$ to go from $a$ to $b$, from $r_p$ to $r_a$, from $a$ to $c$, and more.

As you can imagine, there are many ways to compute the eccentricity starting from any two parameters. Here are the most useful ones:

$$e = \frac{c}{a} = \sqrt{1 - \frac{b^2}{a^2}} = \frac{r_a - r_p}{r_a + r_p}$$
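As a sanity check, here is a small Python sketch (again with arbitrary example values of mine) showing that the different expressions for the eccentricity agree:

```python
import math

r_p, r_a = 2.0, 8.0             # arbitrary periapsis/apoapsis distances
a = (r_p + r_a) / 2             # semi-major axis
b = math.sqrt(r_p * r_a)        # semi-minor axis
c = math.sqrt(a**2 - b**2)      # linear eccentricity

e_from_c = c / a                            # e = c / a
e_from_axes = math.sqrt(1 - (b / a)**2)     # e = sqrt(1 - b^2/a^2)
e_from_apsides = (r_a - r_p) / (r_a + r_p)  # e = (r_a - r_p)/(r_a + r_p)

print(e_from_c, e_from_axes, e_from_apsides)  # all three ~0.6
```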

As I said, this is just a fast introduction to some definitions we need as a starting point for talking about orbits and stuff. In the next article, we will continue our journey into space, talking about seasons and their procedural generation.

*Featured image: Ellipses by Joshua “Blargas” Hicks*


The article WordPress abandoning React: a Facebook horror story appeared first on Davide Aversa.

You are probably asking yourselves: why? A good summary is given in *u/A-Grey-World*’s comment on Reddit:

> Facebook has a clause that says “If you sue us, you loose the rights to use our Patents” in their license to use React (some very popular software). Suing is how you defend your patents. So by giving up your right to sue, you’re effectively giving Facebook the ability to use any patent your company or related companies own.
>
> Now they’d never do that, because it would be stupid – the company who’s patents it was using would just replace React ASAP and then Facebook would be exposed. So it’s too risky for Facebook to ever do. But that mean’s there’d be a big redesign cost of replacing React. Super, super tiny risk. But say “don’t worry the risk is tiny!” to your company lawyers… […]
>
> Ultimately, it’s probably fine to use React. Microsoft, Google etc do. However, there’s a good chance many company lawyers will take one look at it and see a huge exposure of the companies IP, and say “nope”. The confusion over the license can easily be seen as risk.
>
> And if companies say no to React, they’ll be saying no to any WordPress that uses it. WordPress don’t want that, even if their laywers okayed the license – they know there’s a good chance it’ll impact the update of a React WordPress as companies decide it’s easier just to stay clear of the mess.
>
> Like (I suspect) many companies, WordPress just don’t want to deal with all the confusion, and decided not to use React.

To summarize even further: Facebook is introducing a very controversial clause into the React license. The clause says “if you want to sue us, you must stop using React”. There is probably no risk in using React, because of a mutual tacit agreement between you and Facebook to never invoke that clause: Facebook has no advantage in using it unless you start a patent war with Facebook. Nevertheless, even if the risk is *near zero*, it is not *zero*. Many companies want to be legally bulletproof, and replacing React when your entire web application is based on it is a huge problem.

Because this controversial clause propagates through WordPress to any company using it, the burden of this choice is multiplied by several orders of magnitude: every company using WordPress would implicitly accept the React license as well. This is something WordPress cannot accept and, as a result, it decided to give up on React.

However, this is problematic. React was the framework of choice for the full interface rewrite of WordPress (Calypso), and it was starting to make its way into the core WordPress application through the new editor, Gutenberg. As you can imagine, this is a huge slowdown for WordPress, which now has to rewrite everything again in a different framework.

While this is a great problem for WordPress, the consequences will probably be wider. WordPress is the first big company to fall “victim” to Facebook’s killer clause. This may amplify the resonance of Facebook’s decision and push other companies to abandon React or, if they are starting fresh, to choose a different UI framework from the start. Some months ago, Apache itself banned the Facebook clause from all their current and future software (and they are a lot).

Could this be the beginning of some strange *domino effect*? Maybe. But Facebook does not seem to be worried by this, and the clause will not be removed (at least in the near future).

If you are using React, you probably don’t need to worry. As I said, I do not see any direct practical effect of this clause for now (unless you plan to sue Facebook for something). Just keep it in mind.

But if you are more cautious, or if you just hate the ethical implications of this license change, you need to look elsewhere.

If you already have a React application, the logical and less painful solution is to fork React at a version before the clause and move on with that. Luckily, there are already some projects in this direction, like Preact. If you are looking for something compatible with React but Facebook-free, you can try that.

The alternative is to go in a completely different direction. The most hyped framework at the moment is, without any doubt, Vue.js. I cannot give you a detailed review because my front-end experience is very limited. I can just say that **TypeScript** support in Vue.js is insufficient: React works smoothly with TS, while Vue.js and TS together are a much more painful experience. I really hope they can improve it in a future release.

These are the facts. Now come a couple of personal opinions. The first one is to **never trust Facebook**. This clause is a legendary *dick move*. React is probably one of the most successful UI frameworks of recent years. Pushing such a controversial (and, for me, unacceptable) clause when React is already so deep in many applications, and when it is so hard to switch to something else, is horrendous. It is legit, of course, but horrendous.

This is probably the Facebook version of Microsoft’s *“Embrace, extend, and extinguish”*. If they do this now and nobody protests, who will stop them from doing worse next time? I could go on with my Facebook rant, but I would probably go too far off topic.


The article Computing planetary orbits between two celestial objects appeared first on Davide Aversa.

Anyway, during this process, I am reshaping and producing many, many formulas. I am sure that in six months I will have forgotten all the motivations behind them. For this reason, I want to try to save some of them here. This way, I will have a good place to look back at my notes and, moreover, it can be useful to other people trying to do some low-accuracy orbital calculations.

I want to start from the beginning: **orbital period and orbital trajectory.**

First of all, we need to have a clear idea of what the orbital elements are. If we consider a true three-object system (e.g., Sun-Earth-Moon), things get very, very messy. A complete orbital diagram for such a system involves an impossible number of angles, axes and intersections between imaginary lines.

**For this reason, and for simplicity, we will start from a very basic two-object system: a point object orbiting another (more massive) point object.** In this situation, there are no rotation axes and there is no orbital inclination. There is only one object orbiting the other.

- The orbit of a planet is an **ellipse**, with the star at one of the two foci (F in the diagram above).
- A line segment joining a planet and the star sweeps out equal areas during equal intervals of time.
- The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit.

We will look at Laws 2 and 3 later; for now, the first law is enough. **Orbits are elliptical.** This seems easy, but keeping track of non-uniform movement along an ellipse is more complicated than you may think. Assuming that you know how an ellipse is defined (eccentricity, major axis, minor axis, and so on), we are interested in one measure: the **true anomaly**. This is the angle, measured at the star in F, between the periapsis direction and the planet P. In the diagram, this is the angle *f*.

As you can see, if we know this angle and we know the orbit’s dimensions, we know where the planet is on the orbit. Because the orbit’s dimensions are usually given (there are many sets of equivalent dimensions; for the moment, we don’t care), the only thing we need to determine the planet’s position over time is the true anomaly over time.

Computing the true anomaly over time is not a trivial task: we need some intermediate steps. The first stop is the **mean anomaly**. Imagine a circle with radius $a$ (the *semi-major axis* of the ellipse, in other words, half the “width” of the ellipse). Imagine now a body moving along this circular orbit with constant speed (because it is a circle) and with the same *period* $T$ as the real planet. The angle of this imaginary body at time $t$ is the **mean anomaly** $M$.

Computing the mean anomaly over time is very easy: after all, the imaginary body is moving with constant speed. Therefore:

$$M(t) = \frac{2\pi}{T}\,(t - t_p)$$

where $t_p$ is the time of periapsis passage.
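As a minimal Python sketch (the function name and sample values are mine), with the result wrapped into a full turn:

```python
import math

def mean_anomaly(t, T, t_p=0.0):
    """Mean anomaly (radians) at time t, for orbital period T and
    time of periapsis passage t_p, wrapped into [0, 2*pi)."""
    return (2 * math.pi * (t - t_p) / T) % (2 * math.pi)

print(mean_anomaly(0.25, 1.0))  # a quarter period after periapsis -> pi/2
```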

Now, we introduce a new angular measure, the **eccentric anomaly** $E$. This is the angle shown in the diagram above. It is measured at the circular orbit’s center C and points at a kind of *projection* of the real planet’s position on the elliptic orbit onto the imaginary circular orbit we defined before. To compute this value we use *Kepler’s Equation*:

$$M = E - e \sin E$$

Note that $e$ is just the ellipse’s eccentricity. This is a wonderful relation between the mean anomaly and the eccentric anomaly. However, given this formula, it is impossible to express $E$ in closed form. For this reason, there are a thousand methods for approximating this value.

The one I prefer is the recursive one. In short, we rewrite the formula as

$$E = M + e \sin E$$

You can see that we have expressed $E$ as a kind of recursive function. Thus, we can replace the $E$ on the right with the same formula again:

$$E = M + e \sin\!\left(M + e \sin E\right)$$

And we can continue the process over and over, every time obtaining a closer approximation. I usually do this process 3 times and, with some trigonometric magic, I get the following formula (accurate to third order in $e$):

$$E \approx M + e \sin M + e^2 \sin M \cos M + \frac{e^3}{2} \sin M \left(3\cos^2 M - 1\right)$$

Finally, we can rewrite this formula in a way that is easier for the computer (it is faster, it performs fewer *sin* and *cos* evaluations, and it reduces floating-point errors):

$$E \approx M + e \sin M \left(1 + e \cos M + \frac{e^2}{2}\left(3\cos^2 M - 1\right)\right)$$
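Both routes can be sketched in Python like this (function names and the iteration count are my own choices). The fixed-point iteration converges for any $e < 1$, while the closed formula needs only one sine and one cosine evaluation:

```python
import math

def eccentric_anomaly_iter(M, e, iterations=20):
    """Solve Kepler's equation M = E - e*sin(E) by the fixed-point
    iteration E <- M + e*sin(E). Converges for e < 1."""
    E = M
    for _ in range(iterations):
        E = M + e * math.sin(E)
    return E

def eccentric_anomaly_fast(M, e):
    """Third-order-in-e closed approximation: one sin and one cos."""
    s, c = math.sin(M), math.cos(M)
    return M + e * s * (1 + e * c + 0.5 * e * e * (3 * c * c - 1))

M, e = 1.0, 0.1   # example mean anomaly (radians) and eccentricity
E = eccentric_anomaly_iter(M, e)
print(abs(E - e * math.sin(E) - M))           # residual of Kepler's equation, ~0
print(abs(E - eccentric_anomaly_fast(M, e)))  # small for small e (error is O(e^4))
```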

Cool. Now it is time for the last step of our process. We have seen that the eccentric anomaly is just the “angle between the center of the circular orbit and the projection of the real planet on it” (this is a very loose definition, but I think it is the most intuitive one). Therefore, to find the real angle we just have to “project the planet back” onto the elliptical orbit. Fortunately, this is a much less painful problem. The relation between the two angles is given by the following formula (where $f$ is the true anomaly):

$$\tan\frac{f}{2} = \sqrt{\frac{1+e}{1-e}}\,\tan\frac{E}{2}$$

This can be rewritten as

$$f = 2 \arctan\!\left(\sqrt{\frac{1+e}{1-e}}\,\tan\frac{E}{2}\right)$$

A formula that finally concludes our journey.
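Putting the whole chain together, here is a Python sketch of the mean anomaly to eccentric anomaly to true anomaly pipeline (names and sample values are mine). Using `atan2` instead of a plain `tan`/`arctan` pair avoids quadrant issues:

```python
import math

def true_anomaly(M, e, iterations=20):
    """Mean anomaly -> eccentric anomaly -> true anomaly (radians)."""
    E = M
    for _ in range(iterations):        # fixed-point solve of M = E - e*sin(E)
        E = M + e * math.sin(E)
    # f = 2*atan( sqrt((1+e)/(1-e)) * tan(E/2) ), written with atan2
    # so the result always lands in the correct quadrant.
    return 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))

# Sanity checks: for a circular orbit (e = 0) all anomalies coincide,
# and at apoapsis (M = pi) the true anomaly is pi for any eccentricity.
print(true_anomaly(1.0, 0.0))      # ~1.0
print(true_anomaly(math.pi, 0.3))  # ~pi
```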

This is everything for now. I’ve put many concepts on the table and, before we move on, we need to be perfectly at ease with the three angular orbit measurements, the *anomalies*. In the next articles, we will see how to use them to compute some basic orbital events (such as equinoxes) using these formulae.

I have some books that were extremely useful for this task:

- **Astronomical Algorithms by Jean Meeus.** This is a 20+ year old book about writing astronomical algorithms on a computer. For this reason, the code examples are in BASIC. Yep. But the math and the core of the problems are all there. This book is still a gold mine if you are interested in this kind of work.
- **Introduction to Geodetic Astronomy by T.B. Thompson.** There is no link for this, but you can use your web search ability to find a PDF version somewhere. This is an even older book (December 1981) but, at least, there is no obsolete code. The book is all about the celestial math governing the Earth and the Sun (and other orbiting spheres).
- If you want something more recent, there is **Astronomy on the Personal Computer by Montenbruck and Pfleger**, but I didn’t read it. Many people suggested this book to me, and the examples are in more recent C++, but it costs **a lot** for a book that I need only for a side project.

