How to run your own OAuth Identity provider service

Generally, we connect our application to a provider so it can share details about a user. Most of the documentation you'll find online explains how to use an existing provider's service, but very little outlines concisely what is involved if you want to be your own provider and share state across applications you also manage.

Such documentation generally shows how to let your third-party app's users bring their information from another site such as Facebook, GitHub, or Twitter.

But what if you wanted to share information across your web applications in a similar way?

This post is a quick summary of how things work so you can get acquainted with the basics.

Whitelist

Big sites generally aren't one big code repository but a set of separate components. One way to let each component share your account details is to distinguish between their own infrastructure and third parties.

If your app were to use an external resource such as Google, Google users would end up being asked whether they really want to share their information with you. This is why they would get a dialog similar to this one.

img

While it's OK to ask a user to confirm that he wants to share his details with an external site, two components of the same site can share this information transparently.

If you are your own identity provider, you can configure your relying parties as "whitelisted" so that your accounts system doesn't display such a dialog.
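The whitelist check itself can be tiny. Here is a minimal sketch (illustrative only, not FxA's actual implementation) of how an accounts server might decide whether to show the consent dialog; the client identifiers are hypothetical:

```python
# Minimal sketch of the "whitelist" idea: first-party relying parties
# skip the consent dialog, while unknown third-party clients must ask.

TRUSTED_CLIENT_IDS = {
    "wiki.example.org",      # hypothetical first-party wiki
    "discuss.example.org",   # hypothetical first-party forum
}

def needs_consent_dialog(client_id):
    """Return True when the OAuth client must show the sharing dialog."""
    return client_id not in TRUSTED_CLIENT_IDS

print(needs_consent_dialog("wiki.example.org"))     # False: same infrastructure
print(needs_consent_dialog("third-party.example"))  # True: external app
```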

Becoming your own Identity provider

In the case of WebPlatform.org, we wanted to become our own identity provider, and found that deploying our own fork of Firefox Accounts ("FxA") would allow us to do so.

The way it's designed, we have an OAuth-protected "profile" endpoint that holds user details (email, full name, etc.) as the "source of truth". Each relying party (your own wiki, discussion forum, etc.) gathers information from it and ensures it has the same information locally.

In order to do so, we have to make a module/plugin for each web application so it can query the accounts system and bootstrap users locally. We call those "relying parties".

Once a relying party adapter is in place, a web app can check by itself with the accounts server whether there's a session for the current browsing session. If there is, the accounts server hands back an already generated OAuth Bearer token, or generates one for the service in question: the "SSO" behavior.

With the OAuth Bearer token in hand, a relying party (e.g. the WebPlatform.org annotation service) can read details from the "profile" endpoint and then check locally whether it already has a matching user.

If the relying party doesn’t have a user, it’ll create one.

Each relying party is responsible for syncing its local user and session state based on what the accounts service provides.
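The flow above can be sketched in a few lines. This is an illustration of the relying-party behavior, not the actual FxA API: the endpoint URL, field names, and helpers are assumptions made for the example.

```python
# Sketch of the relying-party flow: given a verified OAuth Bearer token,
# fetch the profile document and create or refresh the local user record.

def fetch_profile(token, http_get):
    """Call the accounts server's OAuth-protected profile endpoint.

    `http_get` stands in for an HTTP client; the real endpoint URL and
    response shape depend on your Firefox Accounts deployment.
    """
    return http_get(
        "https://profile.accounts.example.org/v1/profile",
        headers={"Authorization": "Bearer " + token},
    )

def sync_local_user(profile, local_users):
    """Create the local user if missing, otherwise refresh its details."""
    uid = profile["uid"]
    user = local_users.get(uid)
    if user is None:
        user = {"uid": uid}
        local_users[uid] = user
    # Each relying party mirrors the "source of truth" fields locally.
    user["email"] = profile["email"]
    user["full_name"] = profile.get("displayName", "")
    return user

# Simulated run with a canned profile document (no network involved).
fake_profile = {"uid": "abc123", "email": "user@example.org", "displayName": "Ada"}
local_users = {}
user = sync_local_user(fake_profile, local_users)
print(user["email"])  # user@example.org
```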

A few pieces of code to automate deployment

Automate ALL THE THINGS!

This post is just a simple "link dump" so I can find my way among several scattered notes. I eventually plan to publish all of this work in public projects on GitHub once the loop is closed. All without providing the private data, obviously.

Making the jump to automation takes a lot of preparation, so I'm taking the time to publish here a few pieces of code I wrote to complete the task.

In the end, my project will deploy a site that relies on a MariaDB cluster, Memcached, a ("prefork") LAMP stack when there's no other choice, and a stack of [HHVM/php5-fpm, Python, Node.js] app servers for the rest, served by an NGINX frontend. My scripts will deploy a series of web applications, with all the dependencies that adapt them managed in their own parent git repo. In my case, that will be: WordPress, MediaWiki, Discourse, and a few others.

Requirements

  • Instantiation from nova commands in the terminal creates a new, up-to-date VM whose name defines its role in the internal network
  • VMs are only accessible through a jump box (i.e. internal network only)
  • A system watches whether a git clone has had changes on its "master" branch, and fires an event when it has
  • Each machine is built from a minimal VM image; in this case, Ubuntu 14.04 LTS
  • The system must ensure that ALL updates are applied regularly
  • The system must ensure that its internal services are functional
  • If a VM reaches the critical OOM threshold, the VM reboots automatically
  • The VM's name describes its role, and the installation scripts install the requirements assigned to that role
  • The configuration uses the details (e.g. private and public IP addresses) of each pool (e.g. redis, memcache, mariadb) and automatically adjusts the configuration files of each application
  • … etc.
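One of the requirements above, deriving a machine's role from its VM name, could be sketched like this (the naming convention and role-to-state mapping are hypothetical):

```python
# Hypothetical naming convention: "<role><index>.<domain>", e.g. "db1.internal".
# A provisioning script would pick which states to apply from the role.

ROLE_STATES = {
    "db": ["mariadb", "galera"],
    "cache": ["memcached"],
    "app": ["hhvm", "nginx-upstream"],
    "frontend": ["nginx"],
}

def role_from_hostname(hostname):
    """Strip the numeric suffix and domain to recover the role name."""
    short = hostname.split(".", 1)[0]   # "db1.internal" -> "db1"
    return short.rstrip("0123456789")   # "db1" -> "db"

def states_for(hostname):
    return ROLE_STATES.get(role_from_hostname(hostname), [])

print(states_for("db1.internal"))   # ['mariadb', 'galera']
print(states_for("app3.internal"))  # ['hhvm', 'nginx-upstream']
```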

Pieces of code

Inspiring posts on the subject

Notes of my purchase experience with ASUS slim DVD-RW reader SDRW-08D2S-U

How do you feel when you purchase a tech product from a respected brand, the experience feels sketchy, and you figure out, after receiving the desired piece of hardware, that it doesn't fit your needs at all?

What got me even more disappointed is the fact that I went through product reviews, read tech specs, shopped around for other products, and decided to give it a chance.

Before getting into the depth of the topic, I'd like to apologize for the tone of this post. My "reservoir of goodwill" (ref: Steve Krug) is very low, and I hope this gets somewhere so it can become an incentive to improve the site and the product's support media.

NOTE: I have (some) theoretical knowledge about user-interaction design thanks to my numerous conversations on the topic with relatives and professional contacts who work in software ergonomics. It's in the spirit of removing pain points that I'm outlining them. Since I spent some time writing this essay, I decided to also share it on my site.

0. The product

img My original expectations weren't very high. I wanted a lightweight, small DVD recorder for my backups. You know, to store files you do not need often. I also wanted to encrypt what I burn on disc so that, if I ever have to let go of the disc, I can rest assured the data is not accessible without cracking it.

So, what I was expecting:

  • Small
  • USB2 powered (only one cable)
  • Lightweight

My purchase was then an ASUS SDRW-08D2S-U.

And its support for encryption closed the deal. Sadly, ASUS didn't do anything for Mac OS users.

The following are problems separated by scenarios I went through while getting acquainted with my new purchase.

1. Product registration

During product registration, the form FORCED me to fill in ALL fields, including my date of birth. As if I wanted to give my underwear size along with my full name and the region of the world I live in.

NOTE: After going to my account, I could edit what I wanted to disclose.

2. Searching for drivers

It says it supports Mac OS, right? After whipping the CD into my CD-drive-less MacBook Pro, I expected to see some (hopefully bloatware-free) software to burn CDs and enjoy the disc-encryption offering (more on this later). I found nothing.

I know Mac OS and its underlying BSD architecture leave us no need to install any drivers. But since the package said it supported Mac OS and provided software, I thought I would see about installing it. I found none.

After this, I went to asus.com. It's nice-looking, full of marketing flash pizzazz. But not useful.

Only after 10 minutes of searching the site, finding nothing but ways to compare another product to buy, did I give up on finding an appropriate way to navigate to the product's downloads/drivers page. All I got was promotional navigation. This makes the human who just purchased something frustrated.

The funny thing is that I had to use a search engine that is NOT asus.com's (for useful results, again). I found plenty of other places to buy it, and then I found that page where I left this comment using the form at the bottom.

The fact that the page had no images made me think two things: it was one more detail that made it look sketchy, and maybe the page had outdated information that would not apply to my case.

3. Disc Encryption

Now that I've spent some more time explaining all this: I am very disappointed by the product, because it sells crypto, but, I guess, only under Windows. My last resort will be to create Mac OS-native encrypted DMG files, then burn them. No thanks to ASUS. :(

4. Y-shaped USB

Really?

That's a first for me. I understand the electrical needs and the fact that many laptop USB ports don't always feed enough voltage/amps. I was still surprised.

I saw NO reviews about this.

5. Reading the manual

img I saw NO reviews about this either. The interesting part is that, unlike most people, I actually read the manual, and it was a pain to figure out which boxes were in my own language.

It basically came down to me skipping each box, in case it was in either French or English, just to realize after flipping the full "map" around that I just needed to plug in the cable.

Good manuals don't require long text. Have a look at how IKEA deals with multilingual instructions.

6. Conclusion

Needless to say, I'm disappointed by the purchase. But in the end, I have a not-too-big, not-too-clumsy device that I can use to read and burn DVDs.

I really hope ASUS moves toward improving its software products.

Image credits:
* IKEA instructions, by Mike Sacks
* Product picture is promotional material from ASUS website.

Answers I gave for an article about the impacts of Heartbleed

Credits: Codenomicon

Last spring, I was invited to answer questions for an article about Heartbleed. The invitation was extended by my friends at SaltStack. Since I make extensive use of automated deployment techniques with SaltStack, they thought I'd have things to say.

I accepted to answer the questions on my personal time as the DevOps lead of WebPlatform.org, rather than directly as a W3C team member, even though I had big help from my manager, Doug Schepers, and a few colleagues from the team. Their help was very useful for reviewing and adding stats to my arguments.

The article was published on Dell.com's Tech Page One blog, but it only quotes a few pieces of my original contribution. Here are the full answers I gave to the questions.

What made Heartbleed such a tremendous threat?

It hit the epicenter of our whole Internet infrastructure, it proved every sysadmin's worst nightmare true, and on top of that, it impacted everybody.

OpenSSL is so widely used (on an estimated 2/3 of the web) that many organizations were impacted. The vulnerability had been introduced, as a programming mistake, about two years earlier. What gives the chills is that, since the person exploiting the flaw does not need to be a "man in the middle" nor needs to gain terminal access to the servers, there are no traces left behind. We therefore had no idea how much it was used and by whom.

It's important to remember that the attacker couldn't know what data he would get either, or whether he would get anything interesting at all. And if the "good guys" didn't know about Heartbleed for so long, probably not many "bad guys" did, either.

As a consequence, the world had to make a major effort to update a lot of software and to have people replace many passwords, but the major impact is probably more psychological: we don't know what was stolen, and we don't know if there is another "Heartbleed" hidden in our servers.

On the bright side, the old advice we give to end users, "change your password often", is still valid. This was a nuisance, for sure, but at least Heartbleed didn't change anything there.

Is there anything about the current computing landscape that made the threat greater than it might have been in the past?

The set of tools available today has grown dramatically, and the time it takes to gather information through such a vulnerability has shrunk accordingly. There are network port scanners, for instance, that can report with better than 90-percent accuracy in less than 10 minutes.

When attackers get wind of a vulnerability, it's not hard for them to find the information they are looking for. In the case of Heartbleed, it was as simple as scanning for machines listening on TCP port 443 and keeping a list of servers running a version of OpenSSL known to have the mistake. Not hard to do at all for the attackers.
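To make that shortlisting step concrete, here is a small sketch of the version test attackers (or a defender auditing a fleet) would apply to each host's reported OpenSSL version. OpenSSL 1.0.1 through 1.0.1f were affected, and 1.0.1g shipped the fix; this simplified check ignores the also-affected 1.0.2 beta.

```python
# Illustrative sketch (not an exploit): deciding, from a version string as
# printed by `openssl version`, whether a host falls in the
# Heartbleed-vulnerable range (1.0.1 through 1.0.1f; fixed in 1.0.1g).

VULNERABLE_LETTERS = set("abcdef")  # 1.0.1a .. 1.0.1f

def is_heartbleed_vulnerable(version):
    """`version` is e.g. "OpenSSL 1.0.1e"."""
    parts = version.split()
    if len(parts) < 2 or parts[0] != "OpenSSL":
        return False
    number = parts[1]
    if not number.startswith("1.0.1"):
        return False  # older and newer branches were not affected
    suffix = number[len("1.0.1"):]
    # Bare "1.0.1" (no patch letter) was vulnerable too.
    return suffix == "" or suffix in VULNERABLE_LETTERS

print(is_heartbleed_vulnerable("OpenSSL 1.0.1e"))  # True
print(is_heartbleed_vulnerable("OpenSSL 1.0.1g"))  # False
print(is_heartbleed_vulnerable("OpenSSL 0.9.8"))   # False
```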

Today's computing landscape has better tools, but so do the people trying to crack them. A better solution is to strengthen the infrastructure and educate end users. Otherwise, the chain will remain only as strong as its weakest link.

If there is one thing our society should do more of, it is to continue teaching sane privacy techniques such as strong and varied passwords.

Hopefully we'll get to a point where companies rely on "harder to obtain" information than your mother's maiden name or your date of birth.

What were the most important things that companies needed to do in the face of Heartbleed?

Companies that don't maintain adequate server management techniques and tools had a lot of (manual) things to do.

The priority was to upgrade all affected parts of the infrastructure to the patched version, and to ensure that every system that stores passwords got new passwords as soon as possible.

Of course, the issue had to be fixed as quickly as possible, and without interrupting the services… or delaying deliverables they were already working on.

Therefore, one of the most appreciated things a system admin has is a way to manage his fleet of servers remotely. In the old days, we only had SSH (hopefully that one doesn't have its own "heartbleed" problem too), but we now have remote execution tools such as Ansible and SaltStack that can help us quickly get an accurate picture of the whole infrastructure.

Could a Heartbleed-level/style hack happen again?

Before Heartbleed, I was an optimistic person. Besides giving everybody the chills, what Heartbleed did was sound a loud "wake up" call.

It is only a matter of time before something else arises. Computer security is still based on "hard enough to crack" secrets and an accepted set of annoyances. This is where education and better automation tools are of great help: they'd make it easier to justify the existence, maintenance, and deployment of such measures to everybody.

How can companies best protect themselves and their customers moving forward?

Education, and automation; in that order. It's better to have a good answer, ideally quickly, than a quick wrong answer.

I hope to see training about security practices more often. It's not the lack of training material that is the problem, but getting people to take the time to learn the techniques. Organizations such as OWASP, a group that teaches web developers about common security mistakes, are educating about security vulnerabilities.

There are dozens of other potential "oops" moments. One of the most common types of security breach is what we call "SQL injection": yet another programming mistake, sending user input directly to the database server without filtering.
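The mistake and its fix are easy to show side by side. Here is a minimal demonstration using Python's built-in sqlite3 (the table and data are made up for the example):

```python
# Sketch of the mistake and its fix, using Python's built-in sqlite3.
# Concatenating user input into SQL lets an attacker rewrite the query;
# parameterized queries send the input as data, never as SQL text.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name):
    # VULNERABLE: user input becomes part of the SQL statement itself.
    return db.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name):
    # SAFE: the "?" placeholder keeps the input out of the SQL text.
    return db.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "nobody' OR '1'='1"  # classic injection string
print(lookup_unsafe(payload))  # [('hunter2',)] -- the secret leaks
print(lookup_safe(payload))    # []             -- nothing matches
```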

img

In the case of Heartbleed, it was a similar kind of programming error. Ways to achieve peace of mind include, among others, testing, proof techniques, "sandboxing" (protected memory), and making software open source; all of these help catch such errors.

While it's still a hard problem to find the errors in all that code, using those techniques has the merit of putting more than a limited set of eyes on the code.

But all security can be completely overcome by a small human mistake: giving private information to the wrong person. Educating people on how to tell a real collaborator from a potential thief is another challenge in itself.

Many people have been talking about this for ages. The recent events make their teachings more relevant than ever.

Credits

Heartbleed graphic: Codenomicon
Comic strip: XKCD #327

Introduction to the Hypermedia

I was reading around about how to architect a web service in a scalable fashion. You know, the concept of a remote procedure call?

At work, I had a conversation about implementing SOAP with another service, and it struck me that they did not talk about REST. In today's distributed systems, you may want to think twice about whether something newer solves the problem better than a solution designed 20 years ago… surely things have been learned since.

So, I wrote this small introduction to what REST and Hypermedia are.

Beware, I am not an expert, just a curious person who found a nice video and some links about it and is trying to learn and apply it properly.

Actually, I heard about it for the first time from Darrell Miller, in an impressive presentation that blew my mind. Unfortunately, I did not pursue it until a few months ago.

Now I am at a point where I am starting a project in which we have to talk to many nodes, and I would like to take this opportunity of starting from scratch to use its concepts. I am anxious to see how it is going to look.

As for the comparison this post started with: after discussing with my colleagues, I compiled these two descriptions, with code examples in PHP to illustrate.
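My examples are in PHP, but as a quick taste of what "hypermedia" means, here is a sketch (expressed in Python, with illustrative field names following the HAL convention) of a response that carries its own links:

```python
# A hypermedia ("HATEOAS") response describes not only the resource but
# also what the client can do next, via links. The _links/href shape
# follows the HAL convention; the order workflow is made up.
import json

def order_representation(order_id, status):
    doc = {
        "id": order_id,
        "status": status,
        "_links": {
            "self": {"href": f"/orders/{order_id}"},
        },
    }
    # Available transitions depend on state: the client discovers them
    # from the document instead of hard-coding the workflow.
    if status == "pending":
        doc["_links"]["cancel"] = {"href": f"/orders/{order_id}/cancel"}
        doc["_links"]["pay"] = {"href": f"/orders/{order_id}/payment"}
    return doc

print(json.dumps(order_representation(42, "pending"), indent=2))
```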

Continue reading “Introduction to the Hypermedia”

Crash course about how DNS works and the things you should know about it

I often find myself in conversations with people asking how DNS works for hosting their domains. There are plenty of resources about this. Nevertheless, I felt like I could try to make a nice answer in less than 200 words.

What is DNS?

Just to get everybody on equal ground, here are some facts describing the domain name resolution that drives the World Wide Web.

  • The oldest DNS service is the "hosts" file, basically listing IP addresses and names
  • DNS is all about converting a "name" into an IP address
  • A registrar is a provider that takes care of registering your choice of DNS servers with the ROOT servers

Essentials to know about DNS configuration

Now, its configuration. The configuration is made of simple flat text files. The format seems cryptic at first but is very straight to the point.

  • Each file is also called a "zone file"
  • A zone can be created on any DNS server. It is really used ONLY if you specify that server at the registrar
  • Entries in a zone represent a subdomain (A, CNAME), a configuration (TXT), or other peers (NS), one per line
  • Each name must end with a dot. Otherwise it is treated as a subdomain of the current zone file's name

Some examples

Roughly, the A and CNAME entries are the essentials to know about.

A CNAME is an alias to an A record:

    domainname.com. IN A 1.1.1.1
    www IN CNAME domainname.com.
    other.domainname.com. IN A 2.2.2.2

Explanations:

  • Both www.domainname.com and domainname.com in the address bar will get the same IP, but you only have to change the A entry
  • A domain entry always ends with a dot. Otherwise (as in the case of the "www" entry) it gets completed with the zone name, from the SOA (Start Of Authority) declaration, not shown here
  • The MX and NS entries are also important: MX lists the mail servers and NS the other authoritative DNS servers. Just make sure they follow through
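The trailing-dot rule is worth demonstrating, since it bites everyone at least once. Here is a tiny sketch of how a name in a zone file gets qualified (simplified; a real parser takes the origin from the zone's $ORIGIN/SOA, here hard-coded to the example zone):

```python
# Sketch of the trailing-dot rule: a name ending with "." is already
# fully qualified; anything else is completed with the zone's origin.

def qualify(name, origin="domainname.com."):
    """Return the fully qualified domain name for a zone-file entry."""
    if name.endswith("."):
        return name              # absolute: used as-is
    return name + "." + origin   # relative: zone origin is appended

print(qualify("www"))                    # www.domainname.com.
print(qualify("domainname.com."))        # domainname.com.
print(qualify("other.domainname.com."))  # other.domainname.com.
```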

Hope this helps clarify things.

My wish for writing well on the web: an open letter to Druide

Hello,

I am a web developer, and I am looking around the web for a service that would let me validate my text. Many people in the marketing world swear by your products, but I see no offering of the sort other than WebElixir.

In my opinion, many web developers would love to have access to your service, to add your tool to their content editing process.

Interactively.

My point of view

I would like to have access to a REST service with an OAuth token, to send my text and receive it back revised.

I am a PHP developer using Symfony2, and what follows is based on the tools I use in PHP. Rest assured there is an alternative for the other languages of the web such as .NET, Python, and Ruby, since the concepts I bring up here are commonplace from one web technology to the next.

Potential

  • Getting help directly in an editing window of the administration area of a site managed by WordPress (one of the most used CMSes in the world)
  • A tweet-publishing tool with syntax checking, as a widget for a corporate portal
  • Decentralizing the architecture (surely WebElixir has an API over HTTP?)
  • A lightweight tool for smartphones (only supporting the pastebin that serves as text input)
  • Connectivity with the iOS and Android APIs
  • Web services are offered for almost everything (Twitter, Facebook, Evernote, Solr (Lucene, the search index), and many others)

Preliminary draft of a solution


Do you have anything similar? (Other than WebElixir?)


Thank you for reading.


Reply received from Druide Informatique

Hello Mr. Renoir,

We have nothing similar to what you mention in your message, nor an API to develop on the basis of WebElixir.

However, we thank you for your comment. Over time, our products keep improving, and it is largely the remarks and suggestions of their users that guide our work.

Comments like yours are duly noted, and each one is subject to diligent analysis. It is thus possible that the improvement you suggest, along with several others, will one day be part of a future version or edition of Antidote or WebElixir.

Our most sincere regards.

(…)

Free software week in Montreal, "MonDev"

MonDev
Montreal Open Source Week (La semaine des Logiciels Libres de Montréal), MonDev

During the week of May 24 to 28, 2010, the MonDev free software week will take place. For this special week, we will guide you through the week's geek activities.

On the menu: meeting members of the Montreal free software community, notably the folks from PHPQuébec and TikiWiki.

The main goal of the event is to create a podium for free software during the week. We decided to hold the event during the week of Webcom Montréal 2010, because many people will come from far away to see Webcom and Make Web Not War, and we felt we could put on other events that would interest us as much as our visitors.

Continue reading “La semaine Des logiciels libres à Montréal «MonDev»”

Crash course on Java environments

I am currently in "documentation" mode, and I thought the web could benefit from a summary, in French, of the jargon of the Java universe.

I am not a Java developer, but I administered Java servers for 4 years, and I still do today. This document summarizes my understanding of the practices. If you have suggestions or want to correct me, tell me in the comments and I will adjust my post.


The versions

The major difference between Java distributions lies in their version and what is distributed with them. The main distinction is between:

  • JDK (Java Development Kit),
  • JRE (Java Runtime Environment), and finally
  • JME (Java Mobile Edition) for mobile devices.

It's a bit like the difference between a Windows XP Professional and a Windows XP Media Center distribution.

There are several distributors of Java environments, notably IBM, Sun, and OpenJDK. They are all based on the JVM from Sun Microsystems, the inventor of Java.

The major advantage of Java is that distributions exist for all platforms: Windows, Mac, Linux, Solaris, FreeBSD, etc.


Frequently used terms

  • "JVM" (Java Virtual Machine) is the term used for what is actually executed. Everything mentioned above runs a JVM… with different classes (jars).
  • "Jar" is, roughly, a zipped archive of a folder of classes.
  • "Class": a compiled Java class.
  • "Container" is what we call an application server; in short, an HTTP server that runs Java classes.
  • "J2EE" is an acronym that can be seen as a technical specification (think ISO) provided by Sun for the standards of execution environments ("containers").


Application server

There are several. The norm for software following the open-source trend is to use Apache's version of the J2EE container, called Tomcat. Atlassian uses Tomcat in the "self-hosted" versions it distributes.

Others exist, such as GlassFish from Sun Microsystems, WebObjects from Apple, Tomcat from the Apache Foundation, JBoss from Red Hat, WebSphere from IBM, and many more.


A class

What is it? It's compiled Java code.

The hierarchy follows a "namespacing" convention inspired by DNS standards. A specific Java class for a SOAP web service translating French to Klingon could be named, e.g., com.renoirboulanger.startrek.klingon.soap.jar.


Suggestions

I think I have covered the subject. Tell me in the comments if I forgot something important.


Google launches a new protocol to replace HTTP. Hello, SPDY!

Image of Speedy Gonzales by Warner Bros. I just saw the news, and it doesn't seem to have gone unnoticed (see Twitter). The idea is to see whether things could be sped up by adding modules that complete and improve the HTTP protocol (EDIT: I had first misread this as rewriting the protocol), making it better suited than the one written more than ten years ago.

The initiative is described in the post called "Let's make the web faster"; the project is named Chromium (see that post), and it describes what could be done to… make the web faster.

The "Google Chromium" blog talks about this new protocol they want to introduce, to be called SPDY, pronounced "SPeeDY". It is already being prototyped at Google and is already showing performance results 55% faster:

SPDY is at its core an application-layer protocol for transporting content over the web. It is designed specifically for minimizing latency through features such as multiplexed streams, request prioritization and HTTP header compression.

Continue reading “Google lance un nouveau protocole pour remplacer http, hello SPDY!”

"Cloud computing" explained in plain terms

Warning: technical article :)

A passion of mine for some time now is automating deployments in projects. Cloud computing is among the concepts that make it possible to automate that work.

But what is cloud computing, really?

Here is my personal plain-language description, taken from a post I made on a private intranet several months ago.

Continue reading “Le “Cloud computing” vulgarisé”

Netscape's founder is developing a new kind of web browser

Marc Andreessen is backing a start-up called RockMelt. Article published in the NYTimes this morning. It has now been 15 years since Marc Andreessen developed Netscape, which introduced the Internet to millions of people around the world.

Today, history would have a lot to say about it. The article I found mentions that Mr. Andreessen wants to come back to the forefront and innovate again. Found in "Netscape Founder Backs New Browser – NYTimes.com".

Continue reading “Le fondateur de Netscape est en train de développer un navigateur web nouveau genre”

My workspace

Eclipse screen showing the Tomcat configuration, with the mention "on tsc-lamp-dev"

[Edit 2009-08-17: I will update this post with a new version of the tools and more details on how to do it]

Update, and a different way of working

In a more recent post (than this one), I explain how to do it on a Mac.

Over a learning lunch, I talked with a few colleagues and showed how I put together my dev setup with VMware.

At the office we generally work with a virtual machine that has all our tools installed: Rational, the JRE, Tomcat or WebSphere, Eclipse, Cygwin, etc. Personally, I always have a Bash shell open, on Linux as well as on Windows (Cygwin).

As the Linux guy that I am, a colleague and I told ourselves that at some point we could just run our development VMs on Linux, period.
Continue reading “Mon espace de travail”

The future of the PC

UPDATE 2009-08-26: I reviewed a few posts, and I still found this one coherent. Now that we have Augmented Reality, what's next for the user interface on the desktop?

I don't know if you have ever imagined having a computer with an interface similar to the one seen in Minority Report. Technically, things seem to be moving fast enough that we may see them materialize.

Continue reading “Le futur du PC”