Most of the development work around MongooseIM is done in close collaboration with our clients. Many of the new features in MongooseIM 1.6 are the result of building and testing around particular client requirements. We are particularly excited about this release, as it marks the beginning of an improved team dynamic, new high-value features, and many improvements.

What’s new in MongooseIM 1.6

Riak KV
MongooseIM 1.6 introduces support for Riak - the scalable, fault-tolerant, distributed database written in Erlang. For this release, four modules can be configured with Riak, but more are on their way.
Currently, MongooseIM supports Riak as a storage backend for:
This brings more flexibility in terms of database choice for any infrastructure, as you are free to choose between an RDBMS and a NoSQL data store.

Powerful metrics, DevOps love
Version 1.6 offers very powerful metrics and monitoring infrastructure. For the underlying instrumentation, MongooseIM relies on Exometer. It reports OS-level and Erlang VM metrics, in addition to business metrics. You can then push those metrics to any ingesting and graphing system, such as Grafana or Kibana.
This greatly improves DevOps' visibility for managing systems, no matter the size of the installation. Other departments will benefit from having more data to dig into (big data, analytics), thus better understanding the end-users and eventually discovering new opportunities to improve UX, core competencies, and selling points.
Additionally, we have improved how session data from the DB is cleaned up when a node goes down. It eases the support for more database backends in the future, and it brings better cluster handling for DevOps.
Also, it is now possible to change the log level dynamically, with a custom log path. For DevOps, once again, this helps consolidate logging for easier deployment, administration, and analysis.
Get your own easy-to-deploy Docker image of MongooseIM. As it is still experimental and will improve over time, please handle it with care: https://hub.docker.com/r/mongooseim/mongooseim-docker/. Give it a test drive, come back to us with feedback or questions, and tell us what you'd like to see in the future. Feel free to fork it from: https://github.com/ppikula/mongooseim-docker.

Additional high-value improvements
Extensive technical investment means we can continue to deliver a better MongooseIM with the Open Source community: we made changes to the core and integrated our test suite, the first of its kind, in the hope of seeing even more Open Source contributions.
We encourage everybody to review the release notes on GitHub.

Coming up next
MongooseIM 1.6 will continue to bring many benefits such as: extensive and powerful business metrics, flexibility for DevOps, a rock-solid code base and stability for server and backend developers, more conformance for client developers, and extensive testing for all.

1.6.x maintenance series
We will be working on a maintenance version, 1.6.1 (and perhaps a 1.6.2 later), with more complete Riak support, and of course the usual bug fixes, optimisations, and general improvements.

1.7 and subsequent releases
The development cycle for version 1.7 has already begun! We will focus on cloud, mobile, and testing. That is all we can share for now, but feel free to tell us what you think. We will also put more effort and commitment into making release schedules more predictable - in fact, we already are.

Invitations

Be the first to get informed!
We have set up a new public mailing-list for all announcements of major events happening on the MongooseIM front. Expect one or two emails per month; the archives are free and open. We highly encourage you to subscribe here: https://groups.google.com/d/forum/mongooseim-announce
Click on the blue button "Join group":
Then, in "Email delivery preference", click on "Notify me for every new message".

Contribute...
We received some very valuable contributions over the last months, and we would like to thank all of the developers who took part in the delivery of version 1.6: @rgrinberg, @vooolll, @syhpoon, @mweibel, @Stelminator, @larshesel, @ruanpienaar, @aszlig, @jonathanve, @gmodarelli.
We hope to see everyone contributing again in the coming months - you have all the power to closely participate in this fully open source project.

Influence our roadmap!
We encourage you to comment below this blog post, tell us what you think about this release, and where we should go in the future. You can also give feedback on GitHub through “Issues” - we are keen on gathering common problems and goals to provide solutions.
We believe the best thing to do is fork the project and make a pull request when you are happy with your changes. We will discuss it with you and include it in a milestone, entering our fully automated testing process.
In this new talk from the ejabberd Advanced Erlang Workshop, Christophe Romain goes into the details of the ejabberd PubSub implementation. He explains the PubSub plugin system and how to leverage it to optimize ejabberd PubSub for your own use cases.
This talk will teach you how to get more performance and scalability from your Pubsub implementation.
You can watch the full talk online:
You can also browse the slides:
If you like our videos, you may consider joining one of our next Advanced Erlang Workshop:
We’re pleased to announce that Swift 3.0 has reached the release candidate stage. This means we have fixed all known issues and implemented all features we intend to have in 3.0; if no critical issues are found in these builds, we will do a full release in the near future. The packages can be downloaded from the releases page, and a full list of new features can be found in the 3.0 changelog. Highlights include the ability to authenticate using certificates, support for the OS X notification center, and the secure transport mechanism.
We encourage everyone to get the new build, try it out, and tell us about any bugs you come across as we work towards further release candidates and a final release.
This is the first published video for ejabberd Advanced Erlang Workshop.
In this video, I talk about the history of XMPP protocol extensions for group messaging and detail each approach for message broadcasting, from multicast to pubsub.
I also explain how to overcome the limitations of Multi-User Chat protocol to build mobile Whatsapp-like group chat services with ejabberd.
You can watch the full talk online:
You can also download the slides: ejabberd state of the art to implement one-to-many chat services
If you like our videos, you may consider joining one of our next Advanced Erlang Workshop:
Welcome to the 7th issue of our newsletter. Last month was pretty interesting, with several pieces of platform news and some longform analysis. You can subscribe to the XMPP Radar newsletter and receive it in your inbox at the end of each month. Here are the links we found interesting in January:

ejabberd massive scalability: 1 node — 2+ million concurrent users
How Far Can You Push ejabberd? From our experience, we all get the idea that ejabberd is massively scalable. However, we wanted to provide benchmark results and hard figures to demonstrate our outstanding performance level and give a baseline about what to expect in simple cases.

How to chat anonymously online
Chatting anonymously on the internet isn’t solely for shadowy criminal hackers and government operatives. From journalists to congressmen, learning how to adjust the privacy of our digital communication is becoming an ever more important skill.

Using wot.io protocol adapters with node-red
One of the great tools available to developers who use the wot.io Data Service Exchange is the protocol adapter framework. The wot.io protocol adapters make it possible to take existing applications which speak a given protocol, and use them to generate new data products.

Firefox can now send push notifications just like Safari and Chrome
The latest version of Mozilla’s Firefox browser has a trick up its sleeve that could save you time and battery: push notifications. This means, with your permission, that websites can send messages that appear directly on your desktop — even if you don’t have that site open.

WhatsApp and Messenger are going to start monetizing
When Facebook acquired WhatsApp in 2014, investors wondered: With the messenger service seemingly committed to a subscription model for its primary source of revenue, could the social network be more open to this form of revenue than investors thought? Apparently not.

Chatbots are back and they’re about to take over
And it’s going to be weird. As the author writes: Miss Piggy doesn’t like it when I ask her about Kermit. We’re having a conversation in Facebook’s Messenger app, and the famous Muppet is trying to convince me to watch her fictional talk show Up Late With Miss Piggy.

If your web site offers live chat, be prepared for hackers
Live chat has become ubiquitous as a sales and support tool for software-as-a-service and cloud-based services. Entire businesses have been built around providing live chat. However, there are times when you need to be cautious at the help desk.

Prosody 0.9.10 released
The Prosody team is pleased to announce a new minor release from our stable branch. This release fixes another dialback security issue. We strongly encourage all Prosody servers to upgrade as soon as possible.
We have been very pleased with the feedback and the large interest we have received for our XMPP Academy video series.

Announcing ProcessOne IoT Studio
Following that success, we are now launching IoT Studio, a tutorial/Q&A-style video series on building the Internet of Things. It will be a great opportunity to get your questions answered by experts in connected devices.

Connecting things
How do you connect and control things? How can we provision things and manage their identity? What protocols are available to build the Internet of Things, and what are their strengths and weaknesses? These are the types of questions we plan to answer during these videos, and many more.
Our first session will take place on February 16th at 6pm CET. If you are a ProcessOne customer, you can save the date, as you will soon receive a free registration link to attend live.
All IoT developers are very welcome to send us questions before February 15th through our contact form. The topic is far broader than just XMPP, so rest assured we will address a very wide scope of matters. We will select the most interesting topics and questions to reply to.
We are waiting for your input!
In the meantime, you can catch up on our other video series, the XMPP Academy sessions:
We just got back from Brussels, where we had an amazing time at XMPP Summit 19, held on the 28th and 29th of January, and at FOSDEM on the 30th and 31st. Erlang Solutions proudly represented the MongooseIM XMPP server with two of us there: Michal Piotrowski, Tech Lead, and Nicolas Vérité, Product Owner. With all the excitement still fresh in our minds, we sat down and did a review of the most interesting things we saw and heard at the events.

XMPP Summit 19
Each XMPP Summit is an opportunity to talk face to face, which is always good for trust, understanding, and… social events. We had really in-depth, detailed discussions with all the XSF protocol experts, and came out with a few great insights.
These points are all the subject of existing or future XEPs:
Overall, it has been enlightening for us, and we came back with loads of ideas on how to continue the progress on each item.

FOSDEM 2016
For us, the first day's focus was the Real-Time devroom.
Our contribution was Nicolas’ talk, "The state of XMPP and instant messaging: The awakening". Its aim was to present a market analysis: the three generations of instant messaging and the ‘trough of disillusionment'. It also introduced our view on a few of the most important tasks in the near future of XMPP messaging.
We also enjoyed Matthew Wild’s talk, which shared many themes with ours, such as the state of XMPP and the actions needed to improve the situation beyond the scope of specifications alone.
On Sunday, Daniel Pocock, who deployed XMPP servers for the Fedora and Debian communities, further enlarged the emerging vision to SIP and WebRTC, in the main track. He proposed to build a common mission statement, and a list of high-level requirements.
One of our favourite features of FOSDEM was the real-world chatting with attendees while comfortably sitting on very large cushions in the Real Time lounge.
Our shared impression of the two events is that globally, there is a massive wave of renewed energy coming from the XMPP community, but also from the free and open source software developers attending FOSDEM.
In a recent blog post Parse has announced the retirement of its service. This action is in turn complemented with the provision of exporting tools and the open source release of the Parse Server implementation. However, according to the details about data exporting, Parse Server does not implement any push notification functionality and they recommend migrating to a different push provider.
The Parse shutdown was quite a shock for everyone, but fortunately, if you are interested in a SaaS platform to keep feeding your existing Parse-based applications with push notifications, we are here at Boxcar willing to help, as we already did with our ZeroPush migration tools.
If you need help just contact us and our team will guide you to migrate your existing mobile applications!
After the XMPP Summit and FOSDEM in Brussels this weekend (Thursday 28 + Friday 29, and Saturday 30 + Sunday 31), I can now share my impressions.
Having been an XMPP evangelist for many years, at some point I simply lacked motivation and belief. Then I was hired at Erlang Solutions as Product Owner for the MongooseIM XMPP server, where I met a highly competent and clever team. Since then, I had been wondering whether I should put my promoter hat back on. Now I know.
At the XMPP Summit, a lot of discussions took place, and we can definitely say we have moved one strong step forward (MIX, E2E, simpler reconnection). It has been a highly productive edition, with applied core values such as openness, ownership, and responsibility.
At FOSDEM, three interesting talks took place:
If you allow me to enlarge the view outside the XMPP world:
I believe we have here the basis of renewal, with some initiatives that need to include a larger crowd for more feedback, in order to build a real vision.
Can you please participate in this little poll? Take our poll: http://polldaddy.com/poll/9293551
We are pleased to announce a new minor release from our stable branch.
This release fixes another dialback security issue. We strongly encourage all Prosody servers to upgrade as soon as possible.
Successful exploitation allows an attacker to impersonate your server on the XMPP network. A full security advisory can be found here. Many thanks to Thijs Alkemade for discovering and reporting the issue.
We also have a number of other fixes and improvements made since 0.9.9.
A summary of changes in this release:

Security
As usual, download instructions for many platforms can be found on our download page.
If you have any questions, comments or other issues with this release, let us know!
I've just released and uploaded Smack 4.1.6 and 4.2.0-alpha3 to Maven Central.
Smack 4.1.6 contains a few important bug fixes for Smack's stable branch: 4.1. The changelog can be found here: Smack Changelog
The third alpha version of Smack's upcoming development branch contains a ton of fixes and new features. Adventurous users are encouraged to explore the API improvements. But, as the version name suggests, don't use an alpha version for production deployments. A provisional and incomplete list of Smack 4.2's highlights can be found at Smack 4.2 Readme and Upgrade Guide · igniterealtime/Smack Wiki · GitHub
We've had quite a lot of feedback on our recent 4.0.0 release. As is to be expected in a release with so many changes, some issues sneaked in. To address most of them, we released Openfire 4.0.1 today.
This release contains various changes, but most importantly fixes problems introduced in 4.0.0 related to LDAP and certificate store management. The complete changelog is available here: Openfire Changelog
You can download the new release from the download section on the IgniteRealtime.org website: Ignite Realtime: Downloads
The SHA-1 checksums for the various downloads:

Platform          SHA-1 Checksum                            Filename
Mac               (not listed)                              openfire_4_0_1.dmg
Windows           008942ca0f5df38e3a5e69841e97971d658823a7  openfire_4_0_1.exe
Tarball           0de4d1e9533564484decb88b5d59238722774ed7  openfire_4_0_1.tar.gz
ZIP               783155b44e7fc5819b2dddc100acdb925ff5e736  openfire_4_0_1.zip
Source (Unix)     cfbada1c5a25637f0f8c85fef629ff090767e2ab  openfire_src_4_0_1.tar.gz
Source (ZIP)      0e21d915876d6e08d649714f66141d4acf54a09d  openfire_src_4_0_1.zip
Redhat / CentOS   ac7328da9faa7ef2614d51195ad711cc2877e778  openfire-4.0.1-1.i386.rpm
Debian / Ubuntu   1084d7e67299b3ddc20db4bf2a2a6d6bfde29a93  openfire_4.0.1_all.deb
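After downloading, a package can be checked against its published checksum with a standard SHA-1 tool. A minimal sketch, demonstrated on a locally created file (substitute the hash and filename from the table for a real download):

```shell
# sha1sum -c reads "<hash>  <filename>" lines and reports OK or FAILED.
# demo.bin stands in for a real download here.
printf 'hello' > demo.bin
echo "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d  demo.bin" | sha1sum -c -
# prints: demo.bin: OK
```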
XMPP is federated, similar to email, which means different domains can connect to each other. Back in the early days, when a server initiated a connection to another server, the initiating server could be reasonably sure it was connecting to the right place, as it resolved the DNS records (remember, it’s 1999). But the receiving server had no guarantee that the incoming connection was actually from the domain it claimed to be.

Background
Thus dialback was introduced. The mechanism is simple: the initiating server sends a key (the dialback key) to the receiving server. Then the receiving server connects back to the server that the initiating server claimed to be and sends the key. That server replies whether that key is valid or not.
This mechanism creates quite a barrier for spoofing, phishing and spamming. Of course, it is not secure against attackers that can manipulate DNS results (as the receiving server has to resolve the initiating server’s domain), but that requires an active attacker.
To generate dialback keys, many implementations work as follows: they generate a random string (the dialback secret) and then calculate the key as a hash of the dialback secret, the domains, and the stream ID. When verifying an incoming connection, servers often don’t check whether an outgoing connection is actually pending; they simply recompute the key and compare it. The assumption is that anyone who has the dialback secret is authorized to make connections. The stream ID is generated by the receiving server to protect against replay attacks.
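The recipe above can be sketched as follows, along the lines of what XEP-0185 recommends. The exact inputs and hash function vary per implementation; the domains and stream ID here are illustrative:

```python
import hashlib
import hmac

def dialback_key(secret: str, receiving: str, originating: str, stream_id: str) -> str:
    """Derive a dialback key from a server-wide secret: HMAC over the
    two domains and the stream ID, keyed with a hash of the secret."""
    hmac_key = hashlib.sha256(secret.encode()).digest()
    message = " ".join([receiving, originating, stream_id]).encode()
    return hmac.new(hmac_key, message, hashlib.sha256).hexdigest()

# Any cluster node holding the same secret can recompute the key, which is
# why the authoritative server need not be the one that generated it.
k = dialback_key("s3cr3t", "xmpp.example.net", "example.org", "D60000229F")
```

Because the stream ID is part of the HMAC input, replaying a key against a new stream produces a mismatch.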
Generating keys this way makes it easy to support clustering: all the servers for a domain share the dialback secret, so the initiating server and the server doing the validation (called the authoritative server in the XEP) don’t need to be the same. When not using clustering, it would be annoying to require the user to generate a secret, so many servers automatically generate one.

Vulnerabilities

Prosody
The first place where I found the problem was in Prosody. Prosody uses a randomly generated UUID as the dialback secret. However, Lua itself doesn’t include a cryptographically-secure pseudo-random number generator, and neither do any of the required dependencies. So instead they took entropy from the few sources that were available and fed that iteratively into a custom PRNG.
The amount of entropy those sources contain varies between platforms, but in the worst case could be pretty low. The initial seed uses the current time, the amount of CPU time used so far, and a pointer to a new Lua table. The moment a server restarts is quite easy to observe (all connections close, and Prosody even sends its uptime to those who request it), and the CPU time used just after the server has restarted is typically very low (0.01-0.04). The pointer can take a few values, but not much more than 2^19.
The dialback secret is the very first thing the PRNG generates, when the entropy is still minimal. This means the secret is easy to brute-force: a very naive implementation could do it in around 6 minutes.
All an attacker needs is a real domain to obtain a dialback key from the vulnerable server, which they can then use to brute-force the dialback secret and impersonate the vulnerable server to any other server, without any network interception or modification.
This was fixed by using /dev/urandom instead (and dropping support for platforms where /dev/urandom doesn’t exist): https://hg.prosody.im/0.9/rev/c633e1338554, released in 0.9.9.
ejabberd

After I reported this, I wondered how it would work in servers that support clustering, so I looked over the ejabberd source code. I noticed ejabberd generates a random number to use as the key and then stores it in a database together with the target domain.
However, this number was also not generated using a CSPRNG; it was generated using random:uniform() [1], just like stream IDs and random resources. Observing 2 stream IDs is enough to unambiguously compute the internal state of the PRNG in less than a second, and then compute all numbers it has generated and will generate.
The number is stored in the database with the domain, but not with the stream ID, so it can be used multiple times to authenticate a different stream. The attacker can guess which number was used for a connection and open a second connection to the target server, which will then be authenticated successfully. So the attacker can only impersonate the domain to the server the key was originally generated for, and only while the original connection is open (as otherwise the entry gets deleted from the database). It will likely take a few guesses (though it’s unlikely anyone would notice), so it’s harder to pull off than the Prosody attack, but not impossible.
This was fixed by using crypto:rand_uniform instead: https://github.com/processone/ejabberd/commit/fb8a51136519a190145265736c4243095e2516ec and released in 16.01.

Openfire
I had reported the issue hoping to get a coordinated release ready. But then I wondered about the other implementations out there, so I checked out Openfire.
There, I discovered that the implementation is very similar to Prosody, using a dialback secret to derive dialback keys. However, dialback secrets were generated using Java’s Random class, which is also not cryptographically-secure.
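The common thread across these bugs is using a general-purpose PRNG where a CSPRNG is required. An illustrative sketch of the distinction in Python (not any of the servers' actual code):

```python
import random
import secrets

# A seeded general-purpose PRNG (Mersenne Twister here) is deterministic:
# anyone who recovers its internal state or seed can reproduce every
# "secret" it will ever emit.
leaky = random.Random(1234)
replay = random.Random(1234)
assert [leaky.getrandbits(32) for _ in range(3)] == \
       [replay.getrandbits(32) for _ in range(3)]

# A dialback secret should instead come from the OS CSPRNG
# (/dev/urandom on Linux), e.g. via the secrets module:
dialback_secret = secrets.token_hex(32)  # 64 hex chars, 256 bits of entropy
```

Java's Random vs SecureRandom and Erlang's random vs crypto modules embody the same split.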
I did not try to brute-force this one myself, but there is enough documentation to show that it’s easy: http://stackoverflow.com/a/11052736/353187.
This was fixed by using SecureRandom instead: https://github.com/igniterealtime/Openfire/commit/ccfee2eac3f45cfcce31acb1b0132e76c122557d and released in 4.0.0.

Tigase
Then I looked at Tigase, which was the only one that actually did things right, using a CSPRNG and following the recommendations of XEP-0185.

Impact
An attacker opening a connection that has incorrectly been validated using dialback means the attacker can send fake messages from any user on the vulnerable domain (or subscription requests, or file transfers, etc.). However, the attacker does not receive stanzas back: only the initiator of a stream can send stanzas, unless XEP-0288 is used.
Together, the three implementations Prosody (including Metronome), ejabberd, and Openfire make up a large part of the network. This means that if you haven’t upgraded in the last month, you really should get to it!
[1] This PRNG actually has 96 bits of internal state, which is close to enough. However, after the first number, only ~2^44 states are possible. Though this could be brute-forced, some number theory is enough to compute the internal state from a full output value.
This Thursday 28th and Friday 29th of January 2016 is the 19th XMPP Summit, held in Brussels. I will be there with Erlang Solutions, notably with Michal Piotrowski, the tech lead of MongooseIM.
This Saturday 30th and Sunday 31st is FOSDEM. I will give a talk in the Real-Time devroom: "The state of XMPP and instant messaging: The awakening".
From our experience, we all get the idea that ejabberd is massively scalable. However, we wanted to provide benchmark results and hard figures to demonstrate our outstanding performance level and give a baseline about what to expect in simple cases.
That’s how we ended up with the challenge of fitting a very large number of concurrent users on a single ejabberd node.
It turns out you can get very far with ejabberd.

Scenario and Platforms
Here is our benchmark scenario: the target was to reach 2,000,000 concurrent users, each with 18 contacts on their roster and a session lasting around one hour. The scenario involves 2.2M registered users, so almost all contacts are online at peak load. This means presence packets were broadcast for those users, adding traffic on top of the packets handling user connections and managing sessions. In that situation, the scenario produced 550 connections per second, and thus 550 logins per second.
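The quoted login rate follows directly from the session length. A quick sanity check, using only the numbers from the scenario above:

```python
target_users = 2_000_000
session_length_s = 60 * 60  # sessions last around one hour

# In steady state, arrivals per second ≈ concurrent users / session length:
logins_per_second = target_users / session_length_s
# ≈ 556/s, matching the roughly 550 connections per second in the scenario.
```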
The database for authentication and roster storage was MySQL, running on the same node as ejabberd.
For the benchmark itself, we used Tsung, a tool dedicated to generating large loads to test server performance. We used a single large instance to generate the load.
Both ejabberd and the test platform were running on Amazon EC2 instances. ejabberd was running on a single node of instance type m4.10xlarge (40 vCPUs, 160 GiB). The Tsung instance was identical.
Regarding ejabberd software itself, the test was made with ejabberd Community Server version 16.01. This is the standard open source version that is widely available and widely used across the world.
The connections did not use TLS, to make sure we were testing ejabberd itself and not OpenSSL performance.
Code snippets and comments regarding the Tsung scenario are available for download: tsung_snippets.md

Overall Benchmark Results
We managed to surpass the target: we can support more than 2 million concurrent users on a single ejabberd node.
For XMPP servers, the main limitation to handle a massive number of online users is usually memory consumption. With proper tuning, we managed to handle the traffic with a memory footprint of 28KB per online user.
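For a rough idea of what that footprint implies at full load (assuming KB here means KiB):

```python
per_user_bytes = 28 * 1024  # 28 KB per online user, from the benchmark
users = 2_000_000

total_gib = per_user_bytes * users / 1024**3
# ≈ 53 GiB at 2M users, comfortably within the 160 GiB
# of the m4.10xlarge instance mentioned above.
```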
The 40 CPUs were almost evenly used, with the exception of the first core that was handling all the network interruptions. It was more loaded by the Operating System and thus less loaded by the Erlang VM.
In the process, we also optimized our XML parser, now released as Fast XML, a high-performance, memory-efficient Expat-based XML parser for Erlang and Elixir.

Detailed Results

ejabberd Performance
The benchmark shows that we reached 2 million concurrent users after one hour. We were logging in about 33k users per minute, producing session traffic of slightly more than 210k XMPP packets per minute (this includes the stanzas for SASL authentication, binding, roster retrieval, etc.). The maximum number of concurrent users is reached shortly after the 2 million mark, by design in the scenario. At this point, we still connect new users but, as the first users start disconnecting, the number of concurrent users stabilises.
As we try to reproduce common client behaviour, we set up Tsung to send “keepalive pings” on the connections. Since each session sends one such whitespace ping per minute, the number of these requests grows proportionally with the number of connected users. And while idle connections consume few resources on the server, it is important to note that at this scale they start to be noticeable: once you have 2M users, you will be handling 33K such pings per second just from idle connections. They are not represented on the graphs, but are an important part of the real-life traffic we were generating.

ejabberd Health
At all times, ejabberd's health was fine. Typically, when ejabberd is overloaded, TCP connection establishment and authentication times tend to grow to an unacceptable level. In our case, both operations performed very fast throughout the benchmark, in under 10 milliseconds. There were almost no errors (the rare occurrences are artefacts of the benchmark process).

Platform Behavior
Good health and performance are confirmed by the state of the platform. CPU and memory consumption were totally under control, as shown in the graph. CPU consumption stays far from system limits. Memory grows proportionally to the number of concurrent users.
We also need to mention that the CPU values are slightly overestimated as seen by the OS, as Erlang schedulers do a bit of busy waiting when running out of work.

Challenge: The Hardest Part
The hardest part is definitely tuning the Linux system for ejabberd and for the benchmark tool, to overcome the default limitations. By default, Linux servers are not configured to handle, or even generate, 2 million TCP sockets. It required quite a bit of network setup to avoid running out of ports on the Tsung side.
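The post does not list the exact tuning used, but settings along these lines are the usual starting point; the values below are illustrative, not the benchmark's own:

```shell
# Raise the system-wide and per-process file-descriptor limits
# (every TCP socket consumes one descriptor):
sysctl -w fs.file-max=4000000
ulimit -n 4000000

# On the load-generator side, widen the ephemeral port range. Even so,
# roughly 64k ports per local IP caps outgoing connections, which is why
# a single Tsung machine needs many local IP addresses to open millions
# of client connections.
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```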
On a similar topic, we worked with the Amazon server team, as we have been pushing the limits of their infrastructure like no one before. For example, we had to use a second Ethernet adapter with multiple IP addresses (2 x 15 IPs, spread across 2 NICs). It also helped a lot to use the latest Enhanced Networking drivers from Intel.
All in all, it was a very interesting process that helped make progress on Amazon Web Services by testing and tuning the platform itself.

What’s Next?
This benchmark was intended to demonstrate that ejabberd can scale to a large volume and serve as a baseline reference for more complex and full-featured platforms.
The next step is to keep going with our benchmark-and-optimisation iteration work. Our next targets are to benchmark Multi-User Chat and PubSub performance. The goal is to find the limits, optimise, and demonstrate that massive internet scale can be achieved with these ejabberd components as well.

A Few Words on ejabberd Business Edition
ejabberd Business Edition did even better than ejabberd Community Server. The memory footprint was slightly lower overall (5%). However, considering that with eBE we kept part of the data in memory, we did a lot better. We used P1DB instead of MySQL for roster and authentication storage. P1DB is a database developed by ProcessOne, designed especially for ejabberd to meet the needs of large-cluster distribution and replication. It includes a mix of memory storage, fast disk mapping, and built-in data replication across ejabberd nodes. P1DB is built into ejabberd Business Edition.

Join Us at the Advanced Erlang ejabberd Workshop
The next ejabberd workshop organized by the Advanced Erlang Initiative takes place on January 26th in Krakow. Let’s meet there!
MongooseIM 1.6.1 is now out, and we are pretty happy with the work we've done. This release starts the maintenance series 1.6.x. MongooseIM 1.6.1 is about "more": more Riak KV, more tests, more enhancements. Read below to find out what's new.

Riak KV
We extended the support for Riak KV by continuing the work with modules such as the roster (the XMPP contact list) and XEP-0012: Last Activity (a pretty venerable XEP!). Riak KV support for the remaining modules will be completed in release 1.6.2.

Tests
Test coverage has been significantly improved and extended with the latest release: admin module, encrypted S2S, C2S ciphers, ACL, inband registration. We already have a solid, clean code base but you can never have enough of a good thing, so test coverage brings you even more confidence.

Enhancements and fixes
Many bugs got smashed in 1.6.1. Furthermore, to show our love for DevOps, we improved cleanup after a node's death and added a new, reliable API for JID manipulation.

Final word
Get the latest news: Subscribe to the announcements mailing-list.
Elixir Paris Meetup #5 happened on January 12th. We have gathered a team of faithful and enthusiastic developers, passionate about programming languages.
Mickaël Rémond introduced a new project that nicely demonstrates various programming approaches in Elixir. The FastTS project (for Fast Time Series) grew out of the discovery of the Riemann monitoring tool during Paris.ex meetup #3. FastTS takes its inspiration from Riemann and Phoenix to create a monitoring tool and time-series router in Elixir.
The FastTS project and talk introduced the following concepts:
Slides for the talk, in French, can be downloaded here: FastTS: Étude d’un outil de monitoring et routeur de métriques en Elixir.
If you speak French, you may also enjoy the video recording of the talk:
The meetup ended with a discussion comparing Elixir with other languages like Clojure and Haskell.
The next Paris.ex Meetup will take place on March 8th at l’Anticafé. We hope to see you there!
Playing around with emoji over the weekend, I found out that the Unicode consortium did some pretty neat tricks here and there:
This ‘happy new year’ release of ejabberd is the culmination of one year of major improvements. This is yet another milestone for ejabberd, being the starting point of a new phase of cleanup and optimisations for your favourite server.
This release contains a security fix for possible server spoofing via a brute-force attack on the random number generation. Even though the issue is difficult to exploit, upgrading is recommended if you’re using server-to-server (s2s) connections.
It also includes:
– better groupchat archiving support with MAM
– improved PubSub performance and control
– more shaper capabilities for listeners
– performance optimisation and lower memory consumption of the core XML processing modules
– faster core routines
All our binary installers now provide each ejabberd dependency in its own directory, following the installation scheme of the standard ‘make install’ process.
Finally, as usual, we fixed bugs and improved many features across the whole server.
As you see with the following changelog, we had a very busy holiday season :)
Here is the full list of changes:

Changes

Security
As usual, the release is tagged in the Git source code repository on Github.
The source package and binary installers are available at ProcessOne.
If you suspect that you’ve found a bug, please search for it or file a bug report on Github.