10 - More advanced monitoring
Stéphane Bortzmeyer
Abstract
Of course, everybody automatically monitors their services and machines. But some aspects of monitoring are not widely known and deserve more exposure. This talk will cover some of these aspects, using examples from the Icinga program. (Icinga has a configuration language that makes it easier to manage groups of machines and services.) We'll talk about tests made of sub-tests, with results expressed as "at least M among the N sub-tests must succeed". We'll talk about writing your own tests. And we'll mostly talk about multi-point monitoring. The Internet is a very complex environment, and one consequence is that the result of a test may depend on the vantage point. A routing (or badly handled filtering) problem may appear on some network operators only. A test from a single vantage point will either miss the problem or, on the contrary, make it appear universal when it is only local. Multi-point monitoring is therefore important, for instance using the RIPE Atlas probes or the satellites of a monitoring program. This implies the ability to synthesize the results of the various tests. This is a technical talk, intended for network administrators.
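The "at least M among the N sub-tests must succeed" rule can be sketched as follows. This is a minimal Python model of the aggregation logic, not Icinga's actual configuration DSL; the vantage-point names are made up.

```python
# Sketch of "at least M of N sub-tests must succeed" aggregation across
# vantage points, as described in the abstract. Names are illustrative.

def aggregate(results, minimum_ok):
    """Return (overall state, failed vantage points).

    results: dict mapping vantage point name -> True (OK) / False (failed)
    minimum_ok: the M in "at least M among the N sub-tests must succeed"
    """
    ok = sum(1 for passed in results.values() if passed)
    state = "OK" if ok >= minimum_ok else "CRITICAL"
    failed = sorted(vp for vp, passed in results.items() if not passed)
    return state, failed

# A test seen from three vantage points: a routing problem visible from
# only one operator should not raise a global alarm.
state, failed = aggregate(
    {"paris": True, "frankfurt": True, "atlas-probe-42": False},
    minimum_ok=2,
)
print(state, failed)   # OK ['atlas-probe-42']
```

The failed-points list still surfaces the local problem, which is the whole point of multi-point monitoring: the service is globally up, but one operator sees it down.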
Author biography
Agile AI on quantum smart blockchain? Add a bit of big data and disruption and we can make it a plenary session! ;-) --- The author works at AFNIC, the registry for domain names under .fr. He works on DNS, technical standardisation and technology watch (in particular everything concerning agile artificial intelligence on quantum smart blockchain in the big data era). He is the author of a book on the relationship between Internet infrastructure and politics ("Cyberstructure", https://cyberstructure.fr) and of a blog about computer networks (https://www.bortzmeyer.org/).
18 - The geographic information system, a tool for future network infrastructure management
Stéphane Dannhoff - Jean-Sébastien Tarot
Abstract
The University of Strasbourg metropolitan network is very extensive. It comprises 140 buildings, with 80 kilometres of optical fibre covering an area of 100 km². It even has a European dimension, as it is interconnected with the German BelWü network (Baden-Württemberg) through a partnership with ARTE. Given its size and complexity, it had become difficult to envisage managing the network infrastructure without suitable tools. A specialised geographic information system (GIS) is now essential for planning deployments and for operating and maintaining these telecommunications networks. Although GIS is not new, its use in the telecommunications sector owes a lot to the national context, including the advent of FTTH offerings used by many operators, such as Orange and Free. As a result, tools have been developed specifically for optical fibre networks. We will present the project and the implementation of the GIS at the University of Strasbourg. This project has achieved two important objectives: the transition to all-digital records and the recovery of large amounts of data from existing networks. A major benefit of this tool is the ability to plan major changes in the network. In particular, while updating the metropolitan core network, we were able to improve the reliability of optical routes and develop a project-based approach for connecting new buildings. The highly illustrated presentation will take you right into the heart of the University of Strasbourg's optical fibres, and you will come out very enlightened.
Author biography
Stéphane Dannhoff began his career in the 1990s in the telecoms sector, at a time when the deployment of cable networks was expanding at full speed. In the private sector, he successively held the positions of technician and design engineer before becoming a project manager for new works. Over those years he acquired experience in deploying fixed and mobile telecom network infrastructure. He masters every aspect of project management, and his technical expertise ranges from low-current to high-current systems. In 2016, Stéphane chose to leave the corporate world and joined the University of Strasbourg. Within the Direction du Numérique, he manages the infrastructure of the Osiris metropolitan network. In 2018, he passed the competitive examination and became a design engineer. Since then, he has held the position of maintenance and operations officer. In addition to his recurring operations and management activities on the Osiris optical infrastructure, he is also involved in large-scale cross-cutting building/IT projects, in particular the University of Strasbourg's new data centre. Jean-Sébastien Tarot is an assistant engineer at the Direction du Numérique of the University of Strasbourg, where he has worked for more than 20 years. After starting his career in operations, he took on responsibilities in projects related to the University's optical fibre infrastructure and data centre: * creation of the University's geographic information system and its applications (the subject of an upcoming presentation) * setting up the data centre monitoring * setting up the data centre capacity management. Passionate about computing and geomatics, he particularly enjoys exploring new technical fields.
47 - Use case for an institute-wide L3VPN Renater service
Jérôme Berthier
Abstract
Inria's IT team has completed a project to deploy a unified telephony solution over its IP network. In this context, it was necessary to carry all the traffic related to this infrastructure (signalling and voice flows, but also user and administrator access to the telephony portal). How should these ToIP flows be addressed and routed to connect the elements spread across the nine Inria sites? After a rapid evaluation, the choice was made to use RFC 1918 private IPv4 addressing combined with Renater L3VPN routing. First, we will address the initial problem and the choices made, in terms of both IP addressing and the flow-routing mechanism. After explaining this choice, we will look at what an L3VPN service on Renater is and how to connect to it. The key point of the solution is the development of a WAN routing policy between the different connected sites. We will then discuss the mechanisms put in place to ensure these flow exchanges via the dual WAN connections of each Inria site: * implementation of BGP peerings and local redistribution of Inria's L3VPN IPv4 prefixes * routing policy specific to the flows to be processed via the L3VPN (Policy-Based Routing) * management of resilience between WAN accesses: dual homing, rapid convergence. Lastly, we will look at the other use cases of this L3VPN service, designed to be scalable from the outset.
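The policy-based routing decision described above can be sketched in a few lines. The prefixes and next-hop labels below are invented for the example; the actual Inria addressing plan is not given in the abstract.

```python
# Minimal sketch of the PBR decision: ToIP destinations inside the RFC 1918
# ranges carried by the L3VPN go via the L3VPN next hop; everything else
# follows the default Internet path. Prefixes are illustrative only.
import ipaddress

L3VPN_PREFIXES = [ipaddress.ip_network(p) for p in ("10.20.0.0/16", "10.30.0.0/16")]

def next_hop(dst):
    """Policy-based routing decision for a destination address."""
    addr = ipaddress.ip_address(dst)
    if any(addr in net for net in L3VPN_PREFIXES):
        return "l3vpn"      # private ToIP traffic, routed via the Renater L3VPN
    return "internet"       # default route

print(next_hop("10.20.5.1"))   # l3vpn
print(next_hop("192.0.2.10"))  # internet
```

On real equipment this match is expressed as a route-map or policy rule, but the logic is the same: classify on destination prefix, then pick the WAN path.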
Author biography
Jérôme Berthier began his career in 2002 as a network operations engineer within the La Poste group, in the production team in charge of operating the group's WAN and secure access. In 2008, he joined the IT department of the Inria Bordeaux - Sud-Ouest research centre as a systems and network engineer, supporting the installation and growth of the infrastructure of this newly created research centre. In 2011, he moved to the Inria DSI to work on all of the institute's network infrastructure, in particular leading the WAN and firewall upgrades of the research centres and of the shared hosting site. Today, Jérôme is a network architect in the DSI "Infrastructure Design" department. Alongside the other architects, he leads the study and implementation of IT infrastructure changes, particularly in networking and security. He has also been an Information System Security Correspondent (CSSI) for many years.
51 - Geolocated monitoring of a firewall pool with check_mk
Jonathan Chatriot
Abstract
The firewall installed base that we manage is spread over a large geographical area. Using GPS coordinates seemed essential in order to locate all these devices accurately on a map, so that we can intervene quickly in the event of a failure. The check_mk tool, already presented at JRES 2015, is a powerful monitoring tool based on the Nagios core. Its performance makes it possible to monitor a large number of services in a very short time. With several years' experience of this tool in its free version, we were able to exploit its features to meet our needs. By using simple tools known to the community, we were able to solve our location problem. The GLPI modules, the FusionInventory plugin and the government's open-data platform helped us create our reference base. Using this repository, we automatically generate the configuration of our check_mk monitoring tool. To refine the display, the NagVis plugin integrated into check_mk offers a summarised geographical view thanks to its "worldmap", based on OpenStreetMap. We will detail the entire process, from the inventory of the firewall installed base to the generation of the monitoring configuration and the associated geographical map. We will see that, despite the large number of devices that need to be managed, it is quite possible to obtain effective monitoring in a very short time. We will conclude with a quick review of the use of the free check_mk tool and the value it adds daily for the various teams in the Information Systems Department.
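The configuration-generation step can be sketched as follows. In the talk, the inventory comes from GLPI/FusionInventory enriched with the open-data address base; here it is a plain list of dicts, and the emitted host syntax is illustrative, not actual check_mk configuration.

```python
# Sketch: from an inventory with GPS coordinates, emit one host entry per
# firewall, ready for a worldmap-style geographical view. The inventory
# values and the output syntax are made up for the example.

INVENTORY = [
    {"name": "fw-lille-01", "ip": "192.0.2.1", "lat": 50.6292, "lon": 3.0573},
    {"name": "fw-arras-01", "ip": "192.0.2.2", "lat": 50.2910, "lon": 2.7775},
]

def host_entry(host):
    """Render one monitoring host entry with its coordinates."""
    return (
        f"host {host['name']} {{\n"
        f"  address {host['ip']}\n"
        f"  lat {host['lat']}\n"
        f"  lon {host['lon']}\n"
        f"}}"
    )

config = "\n".join(host_entry(h) for h in INVENTORY)
print(config)
```

Regenerating the whole configuration from the reference base on every change is what keeps the map and the monitoring in sync with the inventory.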
Biographie de l'auteur
Après 9 années passées en tant qu'ingénieur systèmes et réseaux au sein de l'ESR à l'Ecole Centrale de Lille , Jonathan Chatriot a rejoint le Rectorat de Lille en 2014. Actuellement responsable de l'équipe réseau et sécurité au sein de la Direction des Systèmes d'Information, il administre l'ensemble de l'infrastructure réseau du Rectorat. Jonathan est également membre du comité de pilotage du réseau métier régional des Hauts de France Min2rien depuis sa création en 2010. L'échange et le partage de connaissances sont des valeurs essentielles pour lui comme l'illustre sa proposition aujourd'hui.
53 - White-box
Xavier Jeannin - Alain Bidaud - Sébastien Vigneron - Maxime Wisslé - Eric Lachey - Edin Salguero Wellmann
Abstract
White box: what is it? Yet another buzzword? The word "white" refers to the notion of "unbranded", as opposed to a "branded box". White boxes are simply switches/routers capable of running multiple network operating systems (NOS). The consequence is that the active network hardware can be managed independently of its NOS. Are we in the same situation, network-wise, as when Linux made its appearance in the UNIX world? White boxes offer impressive forwarding capabilities at a very competitive price, so why continue to use traditional routers? Some white boxes offer even more independence and freedom because their data plane can be programmed using the P4 language, which provides a high level of abstraction. It is now possible to create your own router relatively simply, but also all kinds of active network equipment, such as tools against DDoS attacks or monitoring without sampling. P4 is a great tool for education and research in the field of networks. Our project is taking place in a European context, in collaboration with the SYVIK regional network and the Normandy Region. It covers practical white box applications in production for several use cases (CPE, switch/router for an Internet exchange point, data centre network). It is therefore necessary to examine the management of the switches/routers (management layer, security, monitoring, automation), the support environment (hardware and software maintenance model, documentation, community), but also the cost aspect, particularly the total cost of ownership.
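The P4 abstraction the abstract refers to is, at heart, a programmable match-action table deciding what the data plane does with each packet. The toy model below is Python, not P4 (real P4 compiles to the switch pipeline); it only illustrates how the same table mechanism covers both the monitoring and DDoS use cases mentioned above. Field and action names are invented.

```python
# Toy model of a match-action table, the core abstraction of P4.
# Real P4 programs declare headers, parsers and tables compiled onto the
# switch; here a list of rules stands in for the table.

TABLE = [
    # (match field, match value, action)
    ("dst_port", 53, "count_dns"),        # per-flow counting: monitoring without sampling
    ("src_rate", "high", "rate_limit"),   # crude hook for DDoS mitigation
]

def apply(packet):
    """First matching rule wins; otherwise the default action applies."""
    for field, value, action in TABLE:
        if packet.get(field) == value:
            return action
    return "forward"    # default action

print(apply({"dst_port": 53}))    # count_dns
print(apply({"dst_port": 443}))   # forward
```

Reprogramming the device then amounts to changing the table and actions, which is exactly the freedom the abstract contrasts with fixed-function routers.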
Author biography
Alain Bidaud began his career as a network engineer. In the early 2000s, he took part in several European projects within the TF-NGN working groups, in particular on the deployment of MPLS technology. He is currently technical director of the Centre Régional Informatique et d'Applications Numériques de Normandie (CRIANN), where he and his team are responsible for maintaining and evolving the SYVIK regional network in Normandy, as well as its high-performance intensive computing resources. Eric Lachey carried out operational maintenance of IT systems (IBM, BULL, NCR) for large organisations (SNCF, Brittany Ferries, etc.). For some twenty years, he has led innovative infrastructure projects for the Conseil Régional de Normandie in partnership with other regional players. The U-CPE project is an opportunity for the regional authority to provide, within its own remit (the region's secondary schools), optimised services and features for connection to the SYVIK regional network. Sébastien Vigneron obtained his master's degree in information systems security in 2009. He completed his six-month end-of-studies internship in the technical team of CRIHAN (CRIANN since 2016), during which he took part in a study and the roll-out of virtualisation at CRIHAN. At the end of his internship in September 2009, he joined the CRIHAN technical team as a systems and network engineer. He contributes to operating the computing clusters and storage clusters, and to the design and operational maintenance of servers and services. He also works on various projects, including the design and implementation of the White-box CPE Normandy project in partnership with GÉANT, GIP RENATER and the Normandy Region. Maxime Wisslé obtained his master's degree in computer networks and embedded systems in 2018. He completed his six-month end-of-studies internship in the "Programmes, Projets Transverses et Innovation (P2TI)" team at GIP RENATER, during which he took part in a study on the "White Box" concept. In August 2018, he joined the "Production des Services aux Utilisateurs" team as a network engineer. He contributes to operating the backbone and access networks, and continues to work on various projects, including the "White Box" study with GÉANT and the study of the Internet strategy on the production side. Xavier Jeannin started out in a mathematics laboratory, then worked for CNRS in a cognitive science laboratory, a biology laboratory and the CNRS network unit (UREC). He now works at RENATER on innovation and European projects. He worked on networking for computing grids as a work package leader in the EGEE project and as a task leader in several GÉANT projects. He is currently task leader of the "Network Technologies Evolution" activity, which covers time and frequency services, quantum encryption, low-latency infrastructure, data transfer, data plane programming, the Router for Academia, Research and Education, and white boxes.
55 - The Unistra Data Center network
Christophe Palanché - Alain Zamboni - Fabrice Peraud - Oumar Niane
Abstract
The University of Strasbourg (Unistra) has just completed the construction of a data centre with a surface area of 450 m², divided into four server rooms with 27 racks each, as well as an operator room with 12 racks. For the network urbanisation of the latter, the DNum chose to deploy a new autonomous network based on data centre technologies. This network is built on a spine/leaf architecture and provides underlay/overlay separation using the EVPN/VXLAN protocols. The model was implemented on Arista equipment selected after a call for tenders. The dual connection of all these devices allows the network to meet high-availability constraints. It is also linked to the university's Disaster Recovery Plan (DRP) site, currently located in Strasbourg; the architecture choices allow its relocation to another Higher Education and Research data centre if necessary. The creation of this new network was also an opportunity to deploy state-of-the-art administration and monitoring tools. The poster will present the components of the data centre network: * the fabric architecture; * the technologies and equipment used; * the tools used for deployment and operation: Zero Touch Provisioning (ZTP), configuration API, telemetry, configuration backup; * examples of use cases: extension of a customer network into a rented rack, interconnection of a redundant firewall with announcement of its BGP routes.
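One deployment-automation step behind such a fabric is numbering the underlay: a point-to-point /31 per spine-leaf link and a BGP ASN per leaf. The sketch below is a generic illustration under assumed values (the supernet and ASN range are invented, not Unistra's plan).

```python
# Sketch: generate an underlay numbering plan for a spine/leaf fabric.
# A /31 is carved out of a supernet for each spine-leaf link (RFC 3021
# allows both addresses of a /31 to be used on point-to-point links),
# and each leaf gets its own private ASN. All values are illustrative.
import ipaddress

def underlay_plan(spines, leaves, supernet="10.255.0.0/24", base_asn=65001):
    links = ipaddress.ip_network(supernet).subnets(new_prefix=31)
    plan = []
    for i, leaf in enumerate(leaves):
        for spine in spines:
            net = next(links)
            plan.append({
                "link": f"{spine}-{leaf}",
                "spine_ip": f"{net[0]}/31",
                "leaf_ip": f"{net[1]}/31",
                "leaf_asn": base_asn + i,   # one private ASN per leaf
            })
    return plan

plan = underlay_plan(["spine1", "spine2"], ["leaf1", "leaf2"])
print(plan[0])
```

Feeding such a generated plan into templated device configurations (or a ZTP service) is what removes per-box manual addressing from fabric deployment.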
74 - eduVPN - Secure access to the Internet
Anass Chabli
Abstract
eduVPN is a new open-source service in the pilot phase at GIP RENATER, rolled out as part of our participation in the GÉANT international projects. It was initially supported and developed in 2015 by the Dutch national research and education network (NREN), which offers the service in production for its community. To date, eight other countries are in the pilot phase. It enables members of the French "Fédération Éducation-Recherche" to establish a secure VPN connection from different platforms (Windows, Linux, iOS, macOS, Android). Users can thereby access the Internet or their institution's private network without fear of prying eyes. eduVPN comes in two options: 1. An instance managed by RENATER and offered to its members, with the aim of enabling a certain population (students, employees or researchers) to connect securely to the Internet using their institutional accounts, via their federated identities. In addition, eduVPN is a collaboration between different NRENs: you can therefore use eduVPN servers from other countries, in the same way as you can use eduroam internationally. 2. An instance rolled out and managed by the institution to access its private network and internal applications in a secure way, thereby replacing or supplementing existing VPN solutions. We will present the various technical building blocks of the service, and address the processes for setting it up at the organisational (governance, charter, etc.) and technical (automated deployment, etc.) levels.
Author biography
Anass Chabli is a member of the Identity Federation team that operates the national federation infrastructure provided by RENATER for the education and research community. He takes part in the design and implementation of authentication and authorisation solutions in an identity-federation context, at both national and international level. It is following his participation in European-level research activities through GÉANT that he presents the "eduVPN" project today.
93 - Migrating a site's infrastructure to IPv4/IPv6 dual stack
Jérôme Berthier - Guillaume Cassonnet
Abstract
Specified in 1998, the IPv6 protocol is the alternative to the IPv4 protocol, whose address space has reached saturation. Since 2017, all Regional Internet Registries (RIRs) have started allocating from their last IPv4 /8 block. This situation collides with exponential growth in connectivity needs, particularly with the development of the IoT. For several years now, major content providers (Google, Facebook, etc.) have been reachable over dual-stack IPv4/IPv6. Similarly, the majority of fixed and mobile Internet service providers offer transparent dual-stack IPv4/IPv6 connectivity to their subscribers. To ensure the connectivity of our users and of the services we publish on the Internet, it has become strategic to support the IPv6 protocol in our IT infrastructure. Far from being the first to implement this change, this presentation illustrates the process followed to integrate IPv6 on an equal footing with IPv4, specifically at the Inria Bordeaux - Sud-Ouest site. First, we will present the preparation of the network and the associated security for IPv6: addressing plan, routing, filtering and first-hop security on switches. We will then present the technical stages of the migration, connecting services to IPv6 and ensuring end-user connectivity. Where to start? In what order should you switch? Which services? Finally, we will draw up a quick assessment of this migration. Spoiler: it works pretty well!
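A common convention when preparing a dual-stack addressing plan is to derive each VLAN's IPv6 /64 from the site prefix and the VLAN ID, so that the IPv4 and IPv6 plans stay aligned. The sketch below uses a documentation prefix and invented VLANs; it illustrates the convention, not Inria's actual plan.

```python
# Sketch of a dual-stack addressing convention: VLAN 20 gets the /64 whose
# fourth group is the VLAN ID in hex (0x14). The /48 is a documentation
# prefix (RFC 3849); real sites substitute their allocated prefix.
import ipaddress

SITE_PREFIX = ipaddress.ip_network("2001:db8:100::/48")

def vlan_subnets(vlan_id, ipv4):
    """Return the (IPv4 network, IPv6 /64) pair for a VLAN."""
    v6 = ipaddress.ip_network(
        (int(SITE_PREFIX.network_address) + (vlan_id << 64), 64)
    )
    return ipaddress.ip_network(ipv4), v6

v4, v6 = vlan_subnets(20, "192.0.2.0/24")
print(v6)   # 2001:db8:100:14::/64
```

Keeping the two plans mechanically linked makes filtering rules and documentation easier to maintain during the migration, since each IPv4 subnet has one obvious IPv6 counterpart.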
Author biography
Jérôme Berthier began his career in 2002 as a network operations engineer within the La Poste group, in the production team in charge of operating the group's WAN and secure access. In 2008, he joined the IT department of the Inria Bordeaux - Sud-Ouest research centre as a systems and network engineer, supporting the installation and growth of the infrastructure of this newly created research centre. In 2011, he moved to the Inria DSI to work on all of the institute's network infrastructure, in particular leading the WAN and firewall upgrades of the research centres and of the shared hosting site. Today, Jérôme is a network architect in the DSI "Infrastructure Design" department. Alongside the other architects, he leads the study and implementation of IT infrastructure changes, particularly in networking and security. He has also been an Information System Security Correspondent (CSSI) for many years.
96 - Using a Linux server instead of a router: The case of Grid'5000 gateways
Simon Delamare - Lucas Nussbaum - David Loup - Dimitri Delabroye
Abstract
Grid'5000 is an infrastructure for research in all areas of distributed computing (clouds, networking, HPC, etc.). The platform consists of approximately 700 nodes, made available to researchers and distributed over eight sites interconnected by a 10 Gbit/s network provided by Renater. Grid'5000 is connected to the Internet through two routers. Until 2018, traditional network equipment was used, but the performance of these routers was not adequate for the needs of researchers, who download increasingly large data from the Internet (Docker images, large datasets, etc.): download rates were limited to 150 Mbit/s, far from the gigabit per second they were supposed to reach. As the estimated cost of replacing them with higher-performance equipment was too high (between €29k and €38k), we decided to replace the existing equipment with standard Linux servers running Debian, at a lower cost. The gamble paid off: performance is convincing, we filled the 1G link without any problems, and since then we have even successfully switched our Internet access to 10 Gbit/s. In addition, using a Linux system simplifies equipment configuration: management via Puppet, implementation of a proxy cache, development of new network services for researchers, etc. We are therefore sharing our feedback on this migration to Linux servers. We will present the hardware and software configuration used, how the routing and filtering functions are performed, the services we set up that would not have been possible on traditional equipment, as well as the performance achieved.
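The abstract doesn't detail the exact Grid'5000 configuration, but turning a Debian server into a router minimally involves enabling forwarding via sysctl and defining a stateful forward chain, typically rendered and pushed by Puppet. The sketch below generates such generic configuration as text; interface names and rules are assumptions, not the Grid'5000 ruleset.

```python
# Sketch: render the minimal sysctl and nftables fragments that make a
# Linux server forward and filter traffic. A generic illustration only;
# interface names ("eth0"/"eth1") and the rule set are made up.

def router_config(wan="eth0", lan="eth1"):
    sysctl = "net.ipv4.ip_forward=1\nnet.ipv6.conf.all.forwarding=1"
    nft = "\n".join([
        "table inet filter {",
        "  chain forward {",
        "    type filter hook forward priority 0; policy drop;",
        "    ct state established,related accept",
        f'    iifname "{lan}" oifname "{wan}" accept',
        "  }",
        "}",
    ])
    return sysctl, nft

sysctl, nft = router_config()
print(nft)
```

Because the whole configuration is plain text generated from data, it fits naturally into a configuration-management tool such as Puppet, which is part of what made the migration attractive.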
Author biography
David Loup is an engineer at Inria in the AVALON research team, at the Laboratoire de l'Informatique du Parallélisme at ENS de Lyon. He is a systems and network administrator on the Grid'5000 experimentation platform.
100 - DNA, the SDN/SDA campus solution at Telecom Paris
Christophe Masson
Abstract
With the proliferation of services offered to users and increased demand for security, modern networks have become much more complex than in the past. These networks require strong security and efficient flow-analysis methods, plus flexible development and deployment. The move of the Institut Mines-Télécom to the Saclay plateau gave us the opportunity to question the direction our new network should take. We had many requirements, and set out to find a solution that would deliver a robust, monitored, resilient and secure network. We therefore opted for the Cisco DNA solution. This solution, based solely on layer 3, enables macro- and micro-segmentation and therefore a high level of robustness and security, further improved by a system of rules between user groups. The 802.1X support in the DNA solution allows flexible management. Finally, DNA provides many analytical tools that enable rapid, accurate diagnosis of anomalies. The presentation will focus on the key concepts of Cisco's DNA solution and its benefits. We will also share our feedback on this new technology.
105 - Automate your network
Jérôme Durand
Abstract
The statistics are frightening: 95% of changes made to networks are done manually. Setting aside the incredible costs associated with these operations, this also has a very strong impact on network security, as nearly three quarters of security incidents are related to configuration errors. At a time when applications are highly automated, the network world may seem a little prehistoric. Yet there are many solutions that allow you to take the plunge and program your network like any other application. Although manufacturers offer turnkey solutions, it is also possible to build your own fully customised automation environment, adapted to the existing infrastructure. In this case, the main risk is getting lost in technical details rather than focusing on the actual business issues. Which topologies lend themselves to simplified mass automation? Which languages are suitable for programming network equipment? Which configuration and verification tools can be used? This presentation aims to provide the key elements to enable every network administrator to take the plunge and move slowly but surely towards automating their network.
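A first concrete step toward the automation described above is generating device configurations from a data model instead of editing them by hand. The sketch below uses only the standard library (real environments often use Jinja2 with Ansible or similar); device names and the template are invented.

```python
# Sketch: render per-device configurations from a data model. Changing the
# model and regenerating replaces manual, error-prone edits; the same data
# can also feed verification tools.
from string import Template

TEMPLATE = Template(
    "hostname $name\n"
    "interface $uplink\n"
    " description uplink to $peer\n"
)

DEVICES = [
    {"name": "sw-bat-a", "uplink": "Ethernet1", "peer": "core-1"},
    {"name": "sw-bat-b", "uplink": "Ethernet1", "peer": "core-2"},
]

configs = {d["name"]: TEMPLATE.substitute(d) for d in DEVICES}
print(configs["sw-bat-a"])
```

The important design point is the separation: intent lives in the data model, syntax lives in the template, so a mass change is a one-line edit plus a regeneration.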
Author biography
Jérôme joined the GIP RENATER R&D team in 2002 and contributed to several IPv6 projects. In particular, he initiated the deployment of the M6Bone, a worldwide test network for IPv6 multicast, and actively participated in the 6Net project, a full-scale test network used to validate the IPv6 protocol in production. In 2006, Jérôme became head of RENATER operations, coordinating the deployment of RENATER-5 and bringing many improvements to the RENATER network (convergence, redundancy management, multicast, etc.). In 2009, he became head of user services at RENATER, helping to pave the way for a global service offering on RENATER beyond network connectivity. Jérôme joined Cisco in 2011 as an expert in routing and switching technologies. He works with customers and partners on designs implementing advanced network technologies: SDN, IPv6, multicast, BGP, MPLS, etc. He regularly reports on Cisco's latest technological advances on his blog reseauxblog.cisco.fr and at various conferences. Jérôme also contributes to the IETF: he is the author of RFC 7454 - BGP Operations and Security and is currently working on other proposals. Recently, Jérôme has been heavily involved in network programmability and automation, in particular SD-WAN and SD-Access solutions. Since 2018, he has been the technical leader for enterprise networks in France.
112 - Feedback from a LoRaWAN deployment in Strasbourg
Guillaume Schreiner
Abstract
Unlike Wi-Fi or Zigbee, which are based on radio technologies with a range of a few dozen metres, the LoRaWAN standard reaches distances of more than 10 km. This performance fundamentally calls into question the previous paradigm of short-range mesh networks, which require dozens of intermediate nodes to cover the same radio distance. The Inetlab platform of the ICube laboratory (UMR CNRS/University of Strasbourg) is designed to provide equipment and software for experimenting with new Internet of Things technologies. In 2016, an original experiment began, aimed at deploying a complete LoRaWAN infrastructure (hardware and software) at city scale, bringing together ICube, the Strasbourg Eurométropole and the start-up Strataggem. The deployment has multiple aims: assessing the operational potential of this standard, the emergence of new applications in the context of smart cities, the benefits of sharing antennas between several local stakeholders to increase radio coverage, and so on. In this article, we will first present the LoRaWAN standard in the context of the IoT, specifying its theoretical advantages and disadvantages. Secondly, we will share our feedback on the architecture of the deployed LoRaWAN infrastructure, from the creation of connected objects to their integration into the information system, including an examination of the observed performance. Finally, we will illustrate the tremendous potential of connected objects through the use cases deployed, which serve both laboratory research activities and the business applications of the various technical departments of the University and local authorities.
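A back-of-the-envelope check makes the >10 km claim plausible: compare free-space path loss at 868 MHz with a typical LoRa link budget. The transmit power and receiver sensitivity below are assumed, commonly cited values (e.g. +14 dBm and about -137 dBm at SF12), not measurements from the Strasbourg deployment.

```python
# Rough link-budget check for LoRa at 868 MHz. Real urban propagation is
# far worse than free space, so this is only an upper-bound sanity check.
import math

def fspl_db(distance_km, freq_mhz=868.0):
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

loss = fspl_db(10)        # ~111 dB over 10 km in free space
budget = 14 - (-137)      # 151 dB link budget under the assumed figures
print(round(loss, 1), loss < budget)
```

The ~40 dB of headroom is what gets consumed by buildings and terrain in a real city, which is why field measurements such as those reported in the talk matter.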
Author biography
Guillaume Schreiner has worked at the University of Strasbourg since 2005. First at the Centre Réseau Communication, operator of the Osiris network, he took part in the IPv6-ADIRE project, a project of the Direction de la Recherche et de l'Enseignement Supérieur aimed at promoting the deployment of IPv6 in universities. In 2006, he joined the LSIIT laboratory to collaborate on various ANR projects (Airnet, SensLab) in areas such as IPv6 mobility, Wi-Fi networks and sensor networks. In 2009, he passed the ITRF design-engineer examination to work in the Infrastructure department of the University of Strasbourg's IT Directorate, participating in major projects such as the deployment of the Osiris 3 metropolitan network and the roll-out of the SOGo collaborative mail tool across the University. In 2012, Guillaume Schreiner returned to the ICube laboratory and CNRS to become technical manager of the Inetlab platform dedicated to experimentation with new Internet technologies. In particular, he contributes to the development and operation of the FIT IoT-LAB Equipex, which provides a tool for reproducible, large-scale experiments for the Internet of Things.
117 - Management and monitoring of optical transport networks
Emilie Camisard - Moufida Feknous
Abstract
Optical fibre and Wavelength Division Multiplexing (WDM) are the main technologies supporting the broadband services used in the education and research community. This kind of network requires in-depth analysis and constant monitoring in order to ensure high availability. We will present feedback from Renater's experience, with analysis methods and operating procedures allowing immediate identification and resolution of failures in an optical transport network. After a brief presentation of the various components of optical transmission, we will describe the key parameters used to control and maintain the optical network, as well as the alarms required for diagnosing incidents and identifying degraded services. Technical documentation, available procedures and service commitments also play a critical role in incident resolution time. Optical power levels, alarms and logs are stored and monitored via network managers and hypervisors such as SolarWinds, and presented in an operational, synthetic way. The most prevalent incidents in optical networks are caused by optical fibre link issues. They are identified by the fibre operator using reflectometers (OTDRs) and photometers. A few measurement examples will be presented. Finally, we will present the areas of improvement that we have identified: proactive ones (regular preventive maintenance of each piece of equipment and monitoring of server state), but also documentary ones (reference files of optical parameters).
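The kind of check performed on optical power levels can be sketched as a loss-budget comparison. The per-kilometre attenuation, connector and splice losses below are typical textbook values (roughly 0.2 dB/km at 1550 nm, ~0.5 dB per connector, ~0.1 dB per splice), assumed for illustration rather than taken from Renater's reference files.

```python
# Sketch: compare measured span loss (Tx power minus Rx power) against an
# expected loss budget, and flag a degraded link. Typical values assumed.

def expected_loss_db(length_km, connectors=2, splices=0,
                     fibre_db_per_km=0.2, connector_db=0.5, splice_db=0.1):
    """Expected end-to-end loss of a fibre span in dB."""
    return (length_km * fibre_db_per_km
            + connectors * connector_db
            + splices * splice_db)

def span_state(tx_dbm, rx_dbm, length_km, margin_db=3.0):
    """Flag a span whose measured loss exceeds budget plus margin."""
    loss = tx_dbm - rx_dbm
    return "DEGRADED" if loss > expected_loss_db(length_km) + margin_db else "OK"

print(expected_loss_db(80))                              # 17.0 dB for 80 km
print(span_state(tx_dbm=0, rx_dbm=-18, length_km=80))    # OK
print(span_state(tx_dbm=0, rx_dbm=-25, length_km=80))    # DEGRADED
```

Keeping the expected values in reference files, as the abstract suggests, is what turns such raw power readings into an actionable degraded-service alarm.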
Author biography
Emilie Camisard has worked at GIP RENATER since 2004. For five years, within the "Advanced IP services and foresight" team, she took part in research activities and studies on optical technologies. From 2009 to 2018, she contributed to the engineering and deployment of versions 5 and 6 of the RENATER DWDM backbone, while serving as RENATER's point of contact for the REFIMEVE+ project (Réseau Fibré Métrologique à Vocation Européenne). In 2018, she joined the "Production des Services aux Utilisateurs" unit, where she oversees the operation of the backbone and access networks and monitors their performance. Moufida Feknous obtained her PhD from the University of Rennes in 2015. Her research focuses on the analysis and optimisation of aggregation in next-generation optical access networks, fixed-mobile convergence and traffic modelling. Her doctoral thesis was carried out within Orange Labs in collaboration with IMT Atlantique (Télécom Bretagne). Before joining IMS NETWORKS at the end of 2015, she spent almost a year at Bouygues Telecom as technical lead for optical access networks, studying the various optical access network architectures and proposing performance indicators to improve their operation. She is currently an optical transport engineer at IMS NETWORKS, contributing to the engineering, deployment and operation of DWDM optical networks.
124 - RENATER's Internet peering strategy
Maxime Wisslé - Frédéric Loui
Abstract
Internet access is now a commodity service for the entire Education and Research community. In order to provide this essential service, RENATER operates a "peering" strategy to optimise its cost. RENATER provides the community with connectivity to the rest of the world and, to do so, subscribes to telecom transit operators. There are two of these, one in the south and one in the north of France. This technical choice has advantages (geographical and technical resilience), but also drawbacks, especially budgetary ones, as the pricing applied by these operators is based on bandwidth consumption. For this reason, the model is under constant study aimed at optimising flows (reduced latency, increased bandwidth) and reducing the associated financial cost. This typically consists of concluding peering agreements with entities (companies, operators, etc.) with a view to establishing direct connectivity to the services offered by these entities rather than going through a third-party supplier. "Peerings" are set up either directly between the equipment of two entities, or via a dedicated architecture for exchanging routes and traffic between several entities, called a "Global Internet eXchange" (GIX). The article presents the general principles of the peering strategy and the policy implemented within RENATER.
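The cost trade-off described above can be sketched with a back-of-the-envelope model. Everything below is hypothetical (the function, the pricing figures and the fee structure are invented for illustration, not RENATER's actual model): a peering is worthwhile when the transit cost it avoids exceeds its own fixed costs.

```python
# Hypothetical back-of-the-envelope model: a peering pays off when the
# transit cost it avoids exceeds its fixed costs (GIX port, cross-connect...).

def monthly_peering_saving(transit_eur_per_mbps, offloaded_mbps, fixed_costs_eur):
    """Monthly transit cost avoided, minus the peering's own fixed costs."""
    return transit_eur_per_mbps * offloaded_mbps - fixed_costs_eur

# Toy figures: 0.50 EUR/Mbps of transit, 8 Gbit/s offloaded to the peer,
# 1500 EUR of monthly fees for the GIX port.
saving = monthly_peering_saving(0.50, 8000, 1500)
```

The real decision also weighs latency, resilience and the stability of the traffic matrix, which is why the abstract describes it as a matter of constant study.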
Author biography
Maxime Wisslé obtained his master's degree in computer networks and embedded systems in 2018. He carried out his six-month final-year internship in the "Programmes, Cross-cutting Projects and Innovation (P2TI)" team at GIP RENATER, during which he took part in a study on the "White Box" concept. In August 2018 he joined the "User Services Production" team as a network engineer. He contributes to the operation of the backbone and access networks, and continues to work on various projects, including the "White Box" study in collaboration with GÉANT and the study of the Internet strategy on the production side.
127 - Why I killed my copper - Highlights of FTTO in Higher Education and Research
Gabriel Moreau - Bernard Maire-Amiot - David Gras - Hervé Colasuonno - Julien Bamberger - Aurélien Minet - Alain Péan - Marie Dejean
Abstract
FTTO means Fibre To The Office, by analogy with FTTH (Fibre To The Home), which is being deployed in France for individuals. The principle of FTTO is to wire a building entirely with optical fibre, removing as much copper cabling as possible, and to install micro-switches in each office (in a duct or adjacent), as close to the machines as possible. Users are still connected with standard RJ45 copper wiring. Through questions and answers, we will highlight the reasons why FTTO is a well-controlled, future-oriented technology. Over the last six years, several building projects within Higher Education and Research have chosen this technology and have seen, or will see, the light of day. Depending on the project, different topologies and technologies are possible. What is the feedback after these years? Is the result as expected? How is the solution experienced on a day-to-day basis? What about security; how is a large fleet of switches configured and maintained; what high availability is possible? How are Wi-Fi, IP telephony and all PoE devices integrated? Does FTTO help reduce energy consumption? How can an FTTO call for tenders be drawn up for a project, what are the essential elements to include and what are the errors to avoid at all costs? Looking ahead, what is the life expectancy of such an infrastructure and what speeds can be envisaged? The RESINFO FTTO working group aims to provide clear answers to all these questions and to share its experience with the community.
131 - The OpenFlow FAUCET controller
Marc Bruyere - David Delavennat - Brad Cowie - Josh Bailey
Abstract
It is clear that, despite the promise of "Software Defined Networking" (SDN) with OpenFlow, there has long been no genuinely deployable and usable controller that did not require a major programming effort. Published just over ten years ago by researchers, the paradigm shift brought about by SDN was followed by industry with Ryu, OpenDaylight and ONOS. However, these are merely frameworks for developing a controller, and have only rarely been used to manage local campus or corporate networks. In this paper we introduce FAUCET, a compact, open-source OpenFlow controller that allows administrators to operate their networks the same way they operate server clusters. Since FAUCET includes all the usual functionality of enterprise networks (switching, routing, LACP, 802.1X, etc.), deploying it on a network does not require any development work specific to the deployment: all that is required is writing a simple YAML configuration file defining the topology and functionality of the network. FAUCET gives operators clear visibility over the network by integrating with Prometheus and the Grafana dashboard tool. Several organisations are using FAUCET, including one of its founding organisations, the University of Waikato, which has for several years been operating part of its network using equipment from four different manufacturers. This paper will recall the foundations of OpenFlow SDN, give an overview of the FAUCET architecture, explain how to deploy a network with it and finally present feedback.
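To give an idea of the YAML configuration mentioned above, here is a minimal sketch in the spirit of the FAUCET documentation (the switch name, port numbers, VLAN id and datapath ID are placeholders):

```yaml
# Minimal FAUCET topology sketch: one switch, two access ports, one VLAN.
vlans:
  office:
    vid: 100
    description: "office VLAN"
dps:
  sw1:
    dp_id: 0x1                # OpenFlow datapath ID of the switch
    hardware: "Open vSwitch"
    interfaces:
      1:
        name: "host1"
        native_vlan: office
      2:
        name: "host2"
        native_vlan: office
```

FAUCET can reload such a file at runtime (e.g. on SIGHUP), which is what makes the "operate the network like a server cluster" workflow possible: the configuration lives in version control and changes are applied without restarting the network.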
Author biography
Marc BRUYERE began his career in 1996 at Club-Internet.fr, then worked at Cisco, Vivendi Universal, Credit Suisse First Boston, Airbus/Dimension Data, Force10 Networks and Dell. He obtained his PhD at CNRS-LAAS, followed by a two-year post-doc at the University of Tokyo. His thesis focused on open-source OpenFlow SDN for IXPs. He designed and deployed the first European OpenFlow IXP for TouIX. He is now a senior researcher at Internet Initiative Japan Lab. David DELAVENNAT first worked for Infoconseil/Infopoint and then Silicomp Réseaux before joining the CNRS in 2003. He currently works in a joint research unit of INSMI and École polytechnique (Institut Polytechnique de Paris), where he is an expert in information systems engineering interested in everything related to security, reproducibility and infrastructure automation, from the data centre to the application (encryption, MFA, HPC, IaaS, PaaS, FaaS, SDN, SDS, GitOps, CI/CD, microservices, containers, schedulers...). Brad COWIE is a member of the WAND network research group at the University of Waikato and a key member of the FAUCET project. Drawing on his years of experience deploying servers and networks, he carries out SDN deployments around the world with FAUCET. Josh BAILEY is an SDN and C64 software developer and researcher - https://www.vandervecken.com
136 - Network Function Virtualization
Jérôme Durand
Abstract
Network infrastructures require the implementation of more and more distinct functions: routing, firewall, IPS/IDS, proxy, optimisation, performance measurement, etc. Although manufacturers offer ever more complete software, boxes are piling up in data centres and remote sites, resulting in complexity, cost and a rigidity that is incompatible with the expected flexibility and speed of execution. One option increasingly being considered is to virtualise these network functions: instead of deploying multiple boxes, an x86 server is installed (or several, if redundancy is required) and the expected functions are virtualised on this infrastructure. This is called NFV - Network Function Virtualisation. What are the challenges associated with this virtualisation? How can performance be guaranteed in a virtualised environment? How can virtual network functions be created and deployed? All of these questions will be addressed in the presentation "The background to virtualisation of network functions".
Author biography
Jérôme joined GIP RENATER's R&D team in 2002. He contributed to several IPv6 projects, notably initiating the deployment of the M6Bone, a worldwide test network for IPv6 multicast. He also took an active part in the 6Net project, a full-scale test network validating the operation of IPv6 in production. In 2006 Jérôme became head of operations at RENATER, coordinating the deployment of RENATER-5 and bringing many improvements to the RENATER network (convergence, redundancy management, multicast...). In 2009 he became head of user services at RENATER, helping pave the way for a global service offering on RENATER, beyond network connectivity. Jérôme joined Cisco in 2011 as an expert in routing and switching technologies. He works with customers and partners on designs involving advanced network technologies: SDN, IPv6, multicast, BGP, MPLS... He regularly reports on the latest Cisco technological advances on his blog reseauxblog.cisco.fr and at various conferences. Jérôme also contributes to the IETF: he is the author of RFC 7454 - BGP Operations and Security and is currently working on other proposals. Recently he has been heavily involved in network programmability and automation, in particular SD-WAN and SD-Access solutions. Since 2018 he has been the technical leader for enterprise networks in France.
143 - The new Osiris Metropolitan Network for the Higher Education and Research in Strasbourg
Sébastien Boggia - Jean Benoit - Cédric Freyermuth - Oumar Niane - Christophe Palanché
Abstract
Osiris is the Strasbourg metropolitan network for Higher Education and Research, operated by the University of Strasbourg. It connects 140 buildings distributed among 17 partner institutions. Currently in its third version, Osiris is based on a classic architecture of VLAN transport and centralised routing. The deployment of its successor, Osiris 4, is planned for the end of 2019. We decided to initiate a real technological breakthrough by establishing an abstraction layer between the transport network (underlay) and the services (overlay). The expected gains, on a 100 Gbit/s core, are increased reliability, performance and ease of operation, while offering new services. Our approach consisted of studying the different products and technologies on the market (MPLS, EVPN/VXLAN, LISP/VXLAN, SPB), with in-depth tests, in order to find the solution best suited to our technical and financial needs and constraints. After making the final choice of technology, our next steps will be to plan the migration operations and build the tools necessary to operate the network.
164 - perfSONAR: recent developments and prospects
Antoine Delvaux
Abstract
perfSONAR, the multi-domain network performance monitoring toolkit, has over 2,000 public instances around the world. It is continually evolving to meet the needs of Research and Education Networks (RENs). Since 2016, pScheduler, the performance measurement coordinator, has been at the heart of its architecture. 2018 brought pSconfig, the perfSONAR instance orchestrator. 2019 provides even more flexibility and reliability in task management, plus new plugins, for example to measure disk-to-disk performance. The GÉANT project, thanks to its expertise as one of the first perfSONAR collaboration partners, provides RENs with two services. The first, PMP (Performance Measurement Platform), deploys and manages more than 30 nodes across GÉANT's various partner networks. It enables participants to test-drive perfSONAR themselves while at the same time measuring the actual performance of the different GÉANT access links. The second, Consultancy & Expertise, offers tailored advice and guidance in defining and establishing a set of nodes, a performance monitoring plan and the dashboards needed to follow network performance in day-to-day operations. The current developments and innovative uses of perfSONAR will be presented: - perfSONAR and network devices, - virtual circuit monitoring (VPN/VRF/LNNS), - latency and jitter measurements for the LoLa project, - performance monitoring for cloud services, - upcoming technology changes in the perfSONAR architecture (backend and frontend).
Author biography
Antoine Delvaux holds a degree in Computer Engineering from the University of Liège, Belgium (1999). He has been involved in the perfSONAR project since 2008. He works for PSNC (Poznań Supercomputing and Networking Center) and is the perfSONAR Service Manager for the GÉANT project. For several years he has taken part in numerous network and service monitoring activities, both operationally and in software development, for several (N)RENs (Belnet, Dante/GÉANT, PSNC, WACREN, International Networks @ Indiana University).
3 - ESUP-SGC, Custom Card Management System for Higher Education and Research.
Vincent Bonamy - David Lemaignent - Jean-Pierre Tran
Abstract
ESUP-SGC is an open-source, multi-service Card Management System written for and by the world of Higher Education and Research. For the past two years, ESUP-SGC has managed more than 100,000 cards on a daily basis in nine Normandy establishments, around "Léocarte", a joint multi-establishment project. Much more than a simple system for printing and encoding cards, ESUP-SGC is strongly integrated into the establishment's information system, of which it is a core element. The product offers synchronous updating of the information system: LDAP directories (Supann), access control systems, CROUS/IZLY services, European Student Card (ESC), printing management system, library loans, etc. Free software (Apache v2 licence) developed by the EsupPortail consortium through the Université de Rouen Normandie, ESUP-SGC can be extended by establishments, which are free to add functionality to its code to adapt it even more closely to their context of use. Equipped with a varied, modern and responsive web back office for administrators and managers, ESUP-SGC also offers a web view to every end user (student, staff, guest, etc.), including external access via the RENATER identity federation. This article and presentation will therefore focus on the motivations behind this development, its functional coverage and the technical aspects that make ESUP-SGC a Card Management System tailored for Higher Education and Research, and therefore a credible alternative to the proprietary software on the market!
Author biography
Vincent Bonamy is a research engineer in the Studies and Development unit of the IT department of the Université de Rouen Normandie. A coordinator and active contributor of the EsupPortail consortium, he is responsible for a number of free software applications driven by EsupPortail. He is notably the author of the free software EsupDematEC, used by nearly 50 establishments, which dematerialises the recruitment procedure for lecturer-researchers and temporary teaching and research staff (ATER). In 2017, with the help of two developers, David Lemaignent and Jean-Pierre Tran, he initiated, developed and deployed a contactless card management system: Esup-SGC. Since then, Esup-SGC has been used in production by the Normandy establishments and is freely available to all from www.esup-portail.org
9 - How does the fediverse work?
Stéphane Bortzmeyer
Abstract
The "fediverse" (a portmanteau of "federation" and "universe") is the set of decentralised social networks, managed independently but exchanging messages. Over the last two years it has gathered a lot of interest, and many programs now connect to the fediverse (Mastodon, Pleroma, PeerTube, FunkWhale, WriteFreely, PixelFed, Mobilizon...). Decentralised social networks are currently a hot topic, especially given the bad practices of the GAFA. Because of censorship and mishandling of personal data, a lot of people are looking for alternatives. How does the fediverse work? One can often read that "all these programs talk the ActivityPub protocol". But ActivityPub is actually only a part of what you need to "talk fediverse"; there are other protocols. Some are very generic (ActivityPub), some are underspecified (authentication on the fediverse). In its current state, the fediverse is not well documented and good explanations are scarce: many developers had to read the source code of the other programs to understand what they were supposed to do. We will therefore explain how the fediverse works, under the nice web interfaces. We will talk about ActivityPub, but also about the ActivityStreams format, the WebFinger protocol, the idea of "linked data", the JSON-LD format, HTTP signatures, and everything else you need to "talk fediverse". This talk will be technical, although the social aspects of the fediverse also raise a lot of issues.
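Two of the building blocks mentioned above can be sketched in a few lines of Python. This is a deliberately simplified illustration (the handle, actor URL and content are made up; real servers add ids, publication dates, addressing and HTTP signatures): (1) the WebFinger URL used to discover an actor from a user@domain address, and (2) a bare ActivityStreams "Note" object in JSON-LD.

```python
import json

def webfinger_url(handle):
    """Build the RFC 7033 WebFinger lookup URL for an 'acct:' resource,
    e.g. '@alice@example.social' -> the /.well-known/webfinger query."""
    user, domain = handle.lstrip("@").split("@")
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource=acct:{user}@{domain}")

def make_note(actor_url, content):
    """A minimal ActivityStreams 'Note'; the @context makes it JSON-LD."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "attributedTo": actor_url,
        "content": content,
    }

url = webfinger_url("@alice@example.social")
note = json.dumps(make_note("https://example.social/users/alice",
                            "Hello, fediverse!"))
```

Fetching the WebFinger URL returns, among other things, the actor's ActivityPub URL; the Note is the kind of object that is then wrapped in a "Create" activity and delivered to followers' inboxes.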
Author biography
The author works at AFNIC, the registry for .fr domain names. He deals with DNS, technical standardisation and technology watch (agile AI on quantum smart blockchain). He is the author of a book on the relationship between Internet infrastructure and politics (« Cyberstructure », https://cyberstructure.fr) and of a blog about computer networks (https://www.bortzmeyer.org/).
21 - NextCloud: a private cloud service for the 5,000 Université Grenoble Alpes staff
Guenael Sanchez
Abstract
Université Grenoble Alpes has noted the increasing use of "cloud" data storage tools offered by private, often American, companies. Following requests from researchers and administrative staff wishing to benefit from such a tool hosted and administered locally, an experiment with the OwnCloud tool was launched six years ago. With nearly 400 users and about 1 terabyte of data, we wanted to move up a level: offer the University's 5,000 staff a "cloud" storage space meeting the following requirements: * data hosted at the UGA; * a quota sufficient to "compete" with private tools; * ease of use and access; * collaborative tools, particularly when compared with the Google suite. We chose the NextCloud and OnlyOffice software, offering 50 gigabytes of space per user. Built on the Grenoble virtualisation platform (WINTER) and the Grenoble data hosting facility (SUMMER), the solution is designed to be modular and "scalable", to track changes in the number of users. After a year of testing with 1,200 users and approximately 10 terabytes of stored data, the solution is now in production and effectively meets the various needs expressed: on the one hand, researchers wishing to synchronise their data between several workstations; on the other, administrative staff on mobile workstations wishing to access and simply save their data. Our technical architecture choices, as well as the different use cases, can be seen on our poster.
22 - MyToutatice: Putting SelfData on the VLE
Olivier Adam - Sophie Schaal - Yannick Bré - Thierry Joffredo - Annabel Bourdé - Albane Guihomat
Abstract
But where is my data? Staff, students and parents are offered a range of digital services by the Ministry, the regional education authority, the upper/lower secondary schools and the local authorities. This offer is brought together in a digital workspace that complies with the master plan for virtual learning environments. Users are continually learning and teaching on the move: in the establishments, on public transport, at home, etc. They change classes and establishments as they continue their studies, and use different equipment for these activities: equipment belonging to the establishment, to the family, their own... As a result, a lot of data is dispersed. The GDPR also introduces the principle of the portability of personal data: VLE users must be able to recover their data when they leave the establishment, wherever it is dispersed (textbook, notes, e-learning applications, etc.). How can users take control of their data according to the SelfData principle? The aim of the regional education authority is to provide them with a strictly personal digital space connected to the VLE, compatible with any type of device, so that they can collect their dispersed content and store their work. In this way, students will be able to access their data year after year, ensure the continuity of their pathway and, in effect, prepare their portfolio. All the contents of this personal digital space will be retrievable by their owner at any time. We will introduce SelfData and our experiment, currently based on the CozyCloud solution, an innovative, privacy-friendly, open-source French alternative.
Author biography
Olivier Adam, CISO, DPO, VLE project lead, Technical Director, deputy to the Director of Information Systems and Innovation of the Rennes education authority, Weber ambassador, known as "the man with a thousand hats". He initiated the national "Digital identity and user relationship" project, IPANEMA, within which he conceived the experiment of a SelfData personal cloud, in collaboration with: * Thierry Joffredo, digital education advisor, PhD in the history of mathematics, whose contagious conviction wins over the most reluctant users; * Yannick Bré, expert in digital teaching practices, former electronics teacher, who gets around by scooter to stay close to the field; * Annabel Bourdé, technical development expert, a walking smile, "unofficial chief happiness officer"! * Albane Guihomat, technical development expert, Breton wrestling champion, you have been warned! * Sophie Schaal, project coordinator on every front, currently being cloned.
28 - Centralised management of a network of WordPress websites
Norbert Deleutre
Abstract
Given the proliferation of WordPress websites: * how can we ensure centralised management of site assets? * how can we update a network of sites in one click? * how can we provide default site templates (personal pages, team pages, researchers, doctoral students, administration), while ensuring the highest level of security and optimum user satisfaction? To guarantee this level of satisfaction, and to support the systems and network administrator, a little-known but relevant solution exists: multisite technology. A multisite network is a collection of sites that all share the same WordPress installation. For the administrator, this means running a single WordPress installation to manage a whole network of sites, with a single interface for administering everything: users, extensions, themes, updates. The network administrator selects the extensions and themes to make available to network users. Users are therefore not authorised to install extensions themselves; their rights are strictly limited to creating and updating content.
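For reference, turning a WordPress installation into a multisite network happens in wp-config.php. The sketch below shows the standard WordPress constants involved; the domain and values are placeholders for a sub-directory install, and the exact lines for step 2 are generated by WordPress itself during Network Setup.

```php
<?php
// Step 1: expose the "Network Setup" screen in the admin UI.
define( 'WP_ALLOW_MULTISITE', true );

// Step 2: after running Network Setup, WordPress asks you to add lines
// like these (placeholder values for a sub-directory network):
define( 'MULTISITE', true );
define( 'SUBDOMAIN_INSTALL', false );
define( 'DOMAIN_CURRENT_SITE', 'www.example.edu' );
define( 'PATH_CURRENT_SITE', '/' );
define( 'SITE_ID_CURRENT_SITE', 1 );
define( 'BLOG_ID_CURRENT_SITE', 1 );
```

Once these constants are in place, the "Network Admin" dashboard provides the single interface described above for users, extensions, themes and updates.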
30 - How the ESRF manages data flows and data analysis as laboratories produce more and more data
Elodie Benoit - Benoit Rousselle
Abstract
In 2009, the ESRF launched a modernisation programme. ESRF-EBS (phase 2 of this programme) aims to design and deliver an extremely brilliant synchrotron light source and to build a new storage ring within the existing structure. This new ring will produce X-rays that are 100 times brighter. In this context, ESRF laboratories will produce large amounts of data, at speeds of around 2 to 3 GB/s per data stream. The ESRF already has two Spectrum Scale (GPFS) systems of 4 PB each. Some laboratories would like to generate up to 200 TB of data per day. As the experiments carried out can be destructive for the analysed sample, it is important to guarantee users maximum storage throughput for all experiments, otherwise data will be lost; 40 experiments can take place in parallel. We need to review the management of experiment data flows. We studied the following possibilities: * online data analysis/reduction; * cache or scratch systems; * distributing the load over several GPFS systems; * testing a GPFS system against the state of the art of storage solutions on the market (Lustre, BeeGFS); * studying alternatives to standard compute servers: PowerPC architecture and deep-learning platforms. What if the solution to our problem was to mix all these ideas? We will detail: * our ideas; * our models; * our tests and their results; * our progress in the project.
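The orders of magnitude quoted above can be sanity-checked with a little arithmetic (decimal units assumed; the 2.5 GB/s mid-range value is our own interpolation of the "2 to 3 GB/s" figure):

```python
# Illustrative arithmetic only: check the figures quoted in the abstract.

TB = 1e12          # decimal terabyte, in bytes
GB = 1e9           # decimal gigabyte, in bytes
DAY = 86400        # seconds in a day

per_day_tb = 200                                 # a lab producing 200 TB/day...
sustained_gb_s = per_day_tb * TB / DAY / GB      # ...as a sustained rate in GB/s

parallel = 40                                    # experiments running in parallel
per_stream_gb_s = 2.5                            # mid-range of the 2-3 GB/s figure
aggregate_gb_s = parallel * per_stream_gb_s      # worst-case aggregate ingest
```

So a single 200 TB/day laboratory corresponds to roughly 2.3 GB/s sustained, consistent with the per-stream figure, while 40 simultaneous experiments could in the worst case demand on the order of 100 GB/s of aggregate ingest, which is why a single storage system is not enough.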
Author biography
Passionate about computing and science since her teens, Elodie has come full circle more than once. After starting out as a web developer, a past she tries to forget by joining the anonymous former-webdev club, Elodie specialised in Linux system administration and then, since joining the ESRF, in storage. Now Elodie revolves with the electrons. When she is not working, she practises scuba diving, so she is not out of her depth when a wave of data is generated at the ESRF. Fortunately, Elodie and her colleagues are there to make sure no data falls through the cracks. DevOps, open source and new technologies are probably the words that best define Benoit. After some development work at Atos, he passed through Kelkoo and Yahoo to be in top form for the ESRF! Like any self-respecting geek, he tinkers with his 3D printer, his racing drone or his smartphone in his spare time. But above all he is happy to join his colleagues in exploring the fascinating world of Big Data at a site as exceptional as the Synchrotron. Illuminated by synchrotron light, Benoit, Elodie and their colleagues experiment and devise architecture solutions; indeed, the specific constraints of synchrotrons do not make their task easy.
33 - Working as a team with OnlyOffice
Jérémy Maton - Geoffrey Bercker - Karl Oulmi
Abstract
At the IT Resource Center of the Institute of Biology of Lille (IBL), we work together daily on the same projects. Many documents, mainly office documents, are created and need to be shared securely and easily, in a way that avoids having multiple copies on different computers. Many solutions exist, including standard file sharing. These proven solutions are unfortunately confined to the local network and often require particular configuration on each computer. To address this problem, online document collaboration tools exist that work directly from a web browser. Among them are the web giants, often American companies, which offer hosting and online publishing. Unfortunately, these solutions raise other problems: where is our data stored, and how confidential is it? At the IBL, we chose to free ourselves from these technical constraints and security concerns by implementing a self-hosted online office tool integrated into our own information system. Among the well-known existing suites (Collabora, OnlyOffice), we chose OnlyOffice because it uses OOXML as its base format, which ensures the best compatibility with the Microsoft Office files we use, and share with partners, daily. We will discuss the advantages, disadvantages and limitations of the OnlyOffice collaborative tool for editing and sharing documents. Note, in addition, that this solution also offers the ability to manage team projects, such as task assignment or project management (e.g. Gantt charts).
Author biography
I began my career in Orléans as a non-commissioned officer in the French Air Force, working as a systems and network administrator specialised in deploying business software in external theatres of operation. In 2007 I joined the CNRS, first in a chemistry laboratory, the Unité de Catalyse et Chimie du Solide, then, in 2016, in a joint service unit of the Institut de Biologie de Lille. As a systems and network administrator, I design and implement the evolution of the hardware and software resources of the entire IT infrastructure. I also administer the systems (mainly Linux, but also Windows and FreeBSD) and look after the availability and security of the services provided.
39 - ezPAARSE-ezMESURE and OpenBadges: Leveraging Open Badges to engage community collaborations and produce better analytics
Thomas Porquet - Dominique Lechaudel
Abstract
Subscriptions to journals and scientific works are very expensive: the budgets enabling students, teachers and researchers to access them amount to around 100 million euros per year for French Higher Education and Research. The tools used to assess their use are strategic, both in terms of documentary and budgetary policy, and these issues are common to every institution. Since 2013, the ezPAARSE open-source software has enabled each establishment to analyse the access logs of users of these resources. Specialised in identifying the resources consulted, ezPAARSE relies on highly collaborative work taking place on the AnalogIST platform, where the structure of each publisher's platform is deconstructed to determine the semantics of its URLs. This work (declaring a new platform, adding an analysis, implementing the corresponding parser, etc.) is clearly delineated and has recently been recognised and rewarded by the delivery of Open Badges, a secure standard defined by the Mozilla Foundation: virtual medals displayed on AnalogIST or on social networks such as LinkedIn, Twitter or Facebook. In this way, the participation of colleagues from a large number of establishments, in France and abroad, is valued and encouraged. The system is now supplemented by ezMESURE, the national data warehouse. Based on the Elasticsearch + Kibana suite, it is used to view, in the form of "facets" and aggregations, the data loaded from local ezPAARSE instances. This environment is the result of a fruitful partnership between Inist-CNRS and the Couperin.org consortium. Useful links: home page: https://www.ezpaarse.org/ - AnalogIST: http://analyses.ezpaarse.org/ - ezMESURE: https://ezmesure.couperin.org/
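As a toy illustration of what an AnalogIST "parser" does, the sketch below maps a consulted URL to a usage event. The URL scheme is invented and this is not a real ezPAARSE parser (real ones are written against actual publisher platforms), but the idea of deriving semantics such as the resource identifier and type from the URL structure is the same.

```python
from urllib.parse import urlparse

def parse_event(url):
    """Derive (platform, unit id, resource type, format) from a made-up
    journal URL like https://journals.example.com/article/10.1234/abcd.5678/pdf"""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    if len(segments) >= 4 and segments[0] == "article":
        return {
            "platform": parts.hostname,
            "unitid": "/".join(segments[1:3]),   # the DOI-like identifier
            "rtype": "ARTICLE",
            "mime": "PDF" if segments[3] == "pdf" else "HTML",
        }
    return None  # URL not recognised: no usage event produced

event = parse_event("https://journals.example.com/article/10.1234/abcd.5678/pdf")
```

Collaborative work on AnalogIST consists precisely in discovering, for each real publisher platform, which URL patterns carry which semantics, then encoding them in such a parser.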
Author biography
Thomas Porquet has been a project officer in the Services & Foresight department of the Couperin.org consortium since 2011. He takes part in the collection and processing of usage statistics through various projects: - Mesure (based on the JUSP portal, where publishers' COUNTER reports are collected and published) and the CC-Plus pilot; - ezPAARSE (enabling institutions to generate usage statistics from their logs); - ezMESURE (where ezPAARSE data is gathered and Couperin members are given an advanced dashboard tool). For ezPAARSE and ezMESURE, Thomas is in charge of publicising and promoting the solutions within the ecosystem, among consortium members and, more widely, internationally. Dominique Lechaudel has worked for the Centre National de la Recherche Scientifique since 1994. He is an engineer specialising in innovative IT projects related to libraries and electronic resources. He was one of the first at Inist to adopt agile methods, in particular Scrum. As Product Owner of ezPAARSE and now ezMESURE, and in collaboration with the Couperin.org consortium, he drives the roadmap and constantly ensures that these open-source solutions meet users' needs.
41 - Internal Tools to Enhance Zimbra User Experience
Yoann Mitaine - Benjamin Rocton - Loic Rochas - Pascal Praly
Abstract
Since 2012, the Grenoble universities have been offering a shared Zimbra messaging service for all staff (14,000 mailboxes and 20 TB of email), managed by the information system department. In addition to personal inboxes, service mailboxes accessible only through Zimbra shares are also offered. Very quickly, the need arose for a graphical tool to manage these shares, as Zimbra does not natively offer this type of interface. An application based on the CakePHP framework and the Zimbra SOAP APIs was therefore developed: Gepadbal. In addition, in order to provide functionality missing from this type of mailbox (out-of-office messages, filters, quota display), we developed our own Zimlets. Another recurring need expressed by our users is the ability to archive emails; however, there is no archiving feature available from the webmail. So, building on our Zimlet experience, we undertook to provide this functionality ourselves. In addition to the JavaScript side inherent to all Zimlets in the user's webmail, we also had to use JavaServer Pages technology to process the email archive in real time on the server. We will give feedback on all the tools we have developed around Zimbra to improve the staff user experience: why we had to develop them, the technologies used, the difficulties and sometimes the unpleasant surprises we encountered... This will be an opportunity to assess the sustainability of these tools and of the messaging service we offer our users.
Author biography
The authors: Pascal Praly, head of the systems team, DGD SI, Université Grenoble Alpes since 2016. Previously head of the Systems and Network team of Université Grenoble 2 for 10 years. Administers messaging systems, web servers, storage servers and LDAP/Active Directory directories. Yoann Mitaine, systems administrator, DGD SI, Université Grenoble Alpes since 2016. Has held systems and network administrator positions in higher education for more than 10 years. Technical lead of UGA's Zimbra platform. His speciality: Shibboleth. Benjamin Rocton, systems administrator, DGD SI, Université Grenoble Alpes since 2016. Previously systems and network administrator at Université Grenoble 2. Administers various systems such as the Ksup, Zimbra, Puppet and Oracle servers. His latest passion: Docker. Loic Rochas, systems administrator, DGD SI, Université Grenoble Alpes since 2018. Previously an apprentice in the DGD SI systems team. Works on messaging and on the monitoring servers. The father of the archiving Zimlet.
42 - Open Science: save, display and share your data
Régis Witz - Julia Sesé - Ana Schwartz - Stéphanie Cheviron - Vincent Lucas
Author biography
After several years working in the transport and banking sectors, Régis Witz now divides his time at the University of Strasbourg between research support and teaching activities.
46 - NumaHOP digital content management platform
Olesea Dubois - Pauline Rivière - Fanny Mion-Mouton
Abstract
NumaHOP is open-source software implemented by the ComUE Université Sorbonne Paris Cité with financial support from the City of Paris. It was developed under the direction of the pilot institutions: the Sainte-Geneviève library, the Sciences Po Paris library and BULAC (the university library for languages and civilisations). It is freely reusable by any institution and its source code is available online on GitHub. NumaHOP is an integrated tool that can manage a document digitisation chain, from the import of bibliographic records and the condition reports of physical documents through to distribution and archiving, thanks to largely automated interfacing between the different stages of digitisation involving the stakeholders: ABES (Bibliographic Agency for Higher Education), digitisation service providers, libraries, ISDs, broadcasters and CINES (National Computing Centre for Higher Education). The benefit of this is threefold: - promote the use of standardised formats; - promote the standardisation of working methods; - enable the sharing of know-how between institutions that use NumaHOP. NumaHOP comprises several functional modules that enable: - automatic conversion of records into different formats; - production of condition reports; - exchanges with service providers (files and data); - use of workflow, statistics and project management functions; - quality control of digitised documents; - automatic export of documents to digital libraries and archiving platforms; - production of OCR, METS files and derived images.
Author biography
Olesea Dubois, formerly in charge of digital projects in the hospital sector, currently works as operational manager of the Digitisation and Digital Archiving service at Sciences Po Paris, within the Directorate of Resources and Scientific Information. She was part of the digitisation team of the Cujas library and also managed the digital resources purchasing-consortia unit at ABES (Bibliographic Agency for Higher Education). She is a graduate of the Université de Caen Basse-Normandie (professional Master's in Publishing) and of the State University of Moldova (research Master's in Documentary Engineering). She holds a category A licence with the French Chess Federation and is a skydiving enthusiast. Fanny Mion-Mouton is deputy head of the Flows and Data unit at BULAC and head of the data description and exposure team. An archivist-palaeographer by training, she completed the DCB programme before joining BULAC in July 2013. In her role, she takes part in various projects related to library IT (the Koha ILS) and to the management of digitisation. Pauline Rivière is a digitisation project manager at the Sainte-Geneviève library. Holding a degree in documentary engineering from the Université Toulouse le Mirail, she began her library career in charge of library IT applications for the municipal libraries of the city of Lille. After passing the ingénieur d'études competitive examination, she joined the Sainte-Geneviève library in 2012, where she provides cross-cutting coordination of the library's digitisation projects and sets up the library's partnerships for digitisation projects.
54 - High performance computing and energy efficiency: focus on OpenFOAM
Cyrille Bonamy - Laurent Lefèvre - Gabriel Moreau
Abstract
High-performance computing is increasingly used within society. Previously reserved for an elite, relying on large computing and storage infrastructures, it is now a core activity for many companies. Indeed, high-performance computing makes it possible to design and optimise many elements for a limited cost, compared to producing prototypes or testing in situ. It is also widely used in big data and artificial intelligence. It therefore seems essential to question the environmental impact of these digital practices. A number of actions have already been initiated in this community (GREEN500, the European CoC eco-responsibility label for data centres...), but these actions generally look at specific or even idealised situations and/or software. The software qualification process in high-performance computing consists of examining the scalability of the software. The originality of this study is to focus on energy scalability (turnaround time as a function of the power consumed), by considering several architectures (three TOP500 machines and a laboratory cluster). The energy cost of an example calculation could be estimated, which shows that the machine that is most efficient in terms of computation time is not necessarily the most energy-efficient, and that depending on the number of cores/processes chosen, the most energy-efficient architecture is not always the same. It was therefore possible to show that the longer the user is prepared to wait, the less energy the calculation uses.
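The core trade-off can be shown with a few lines of arithmetic: energy consumed is average power draw multiplied by wall-clock time, so the fastest machine is not automatically the most energy-efficient. The figures below are invented for illustration and are not taken from the study.

```python
# Illustrative only: (elapsed_hours, average_power_kW) per machine.
runs = {
    "cluster_A": (2.0, 120.0),  # fastest run, but power-hungry nodes
    "cluster_B": (3.0, 60.0),   # 50% slower, half the power draw
}

# Energy in kWh = hours x kW
energy_kwh = {name: t * p for name, (t, p) in runs.items()}

fastest = min(runs, key=lambda n: runs[n][0])        # shortest elapsed time
greenest = min(energy_kwh, key=energy_kwh.get)       # least energy consumed

print(energy_kwh)          # {'cluster_A': 240.0, 'cluster_B': 180.0}
print(fastest, greenest)   # cluster_A cluster_B
```

Here cluster_A finishes an hour earlier yet consumes a third more energy, which is exactly the "waiting longer costs less energy" observation of the talk.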
63 - RENATER, partner of the CLONETS project for examining the construction of a European metrological network
Nicolas Quintin
Abstract
A scientific and technological revolution is taking place in the way frequency and time reference signals are distributed. The development of new techniques in recent years has made it possible to use optical fibre as a medium for this distribution, demonstrating performance better, by several orders of magnitude, than traditional GNSS methods (GPS, Glonass, etc.), over distances up to continental scales. CLONETS (CLOck NETwork Services) intends to prepare the transfer of this new generation of time and frequency distribution technology to industry. Sixteen European players representing all the parties involved (national metrology laboratories, national research and education telecommunications networks, plus several companies) worked together to study the challenges of deploying a pan-European metrology network based on atomic clocks and to define the best strategy for implementing it. By offering very high-performance time and frequency services, this network will be able to meet the rapidly growing needs created by cloud computing, the Internet of Things and energy distribution (smart grids), and will pave the way for the deployment of innovative new applications, as GPS did in 2000 when it opened up to the general public.
69 - State of WebRTC in 2019 and its daily usage on RENdez-vous, a video conferencing service at national scale
Damien Fetis
Abstract
At the end of 2010, discussions began on WebRTC, a protocol for real-time communication in the browser. Very quickly, Google made its version of WebRTC accessible to all users of its browser: a complete, efficient and free implementation. The technology thus became accessible to a large number of developers, and many new videoconferencing solutions emerged. In 2015, RENATER launched RENdez-vous, a new WebRTC-compatible service. This service for Higher Education and Research is based on Jitsi-Meet, an open-source videoconferencing project. Nine years after work began on the WebRTC standard, version 1.0 of the browser API has still not been finalised. The various RFCs specifying the protocols used by WebRTC solutions are still mostly at the draft stage. However, WebRTC is now omnipresent in videoconferencing devices and solutions. After four years of running RENdez-vous, we offer our feedback on the use of WebRTC. This experience allows us to take stock of the real situation of WebRTC in 2019. In particular, we describe how the various changes to the standard and its implementations have impacted the Jitsi-Meet project and our department. Jitsi-Meet offers a very comprehensive solution with a large community of developers, and the project is very responsive to changes in WebRTC in the main browsers. However, its architecture and developments involve a certain complexity. Lastly, we propose a discussion of what the next version of WebRTC may mean for the development of videoconferencing services and for RENdez-vous.
75 - UNCloud, from 0 to 10 000 users in 1 year
Matthieu Le Corre - Arnaud Abélard
Abstract
In 2017, the University of Nantes, France, launched UNCloud, a web service project aiming to facilitate interaction and collaboration between faculty, staff and students as well as external collaborators. Both a storage service and a collaboration platform, offering 100 GB to its 70,000 faculty and staff members and students, UNCloud has become, with its 10,000 users in only a few months, a cornerstone of the institution. To begin, we'll discuss the politics that led to the birth of the project, in a context of mistrust towards free consumer services in an unclear legal landscape. After studying several alternatives, the open-source solution Nextcloud was selected. We will consider its implementation, both technically and organizationally, with special attention given to the test phase and its management. We will explore the technical challenges that led us to build a fully redundant infrastructure, and we'll explain its deployment in detail. Now in production for almost two years, a new era starts: transforming a sharing platform into a complete digital collaboration environment by integrating the existing collaboration services in use at the university: email, calendar, learning management systems, etc. Finally, we'll draw on our experience and the lessons we have learned, especially in user support and technical optimization.
80 - CANCELED - Inequitable Digital Systems
Chantal Enguehard - Anaïs Danet
Abstract
The development of digital applications has resulted in a multiplication of digital interactions, which are sometimes imposed on users. Unfortunately, these applications have a tendency to malfunction. It is these malfunctions and more specifically their consequences that this study intends to address. Our cross-sectoral observations as a lawyer and IT expert have led us to highlight the existence of Inequitable Digital Systems, digital devices that intervene in a legal relationship between two legal subjects and whose traces could be used as evidence of the performance or non-performance of an obligation. One example is the validation of digital public transport tickets. As these digital systems are implemented by only one of the two parties to the legal relationship, an imbalance may appear. When a dispute arises, only this party has access to the traces that prove the performance of the legal obligations. However, these disputes remain largely invisible in the legal sphere due to their low financial value, resulting in the implementation of extrajudicial processes for amicable resolution of disputes. So how can the balance between the parties be restored? Although equal access to digital traces would undoubtedly be a step towards equality of the parties, the question will arise as to the reliability of these traces as well as a possible right of access to this data. A better solution may be to anticipate malfunctions by the IT specialists behind these systems. These are the questions that we will reflect on in this contribution.
Author biography
Chantal Enguehard is a senior lecturer in computer science at the University of Nantes and a member of the LS2N laboratory (UMR CNRS 6004). She previously obtained an engineering degree in computer science from UTC (Université de Technologie de Compiègne) in 1988 and completed her PhD at the CEA (Commissariat à l'Energie Atomique). Her activities are presented at http://pagesperso.ls2n.fr/~enguehard-c/ Anaïs Danet is a professor of private law and criminal sciences at the Université de Reims Champagne-Ardenne and a member of CEJESCO (Centre d'Etudes juridiques sur l'efficacité des systèmes continentaux, EA 4693). She completed her PhD at the University of Bordeaux on the theme of "Presence in procedural law", defended in 2016. She is thus interested in justice in its various facets (judicial and amicable modes of dispute resolution, civil and criminal procedure).
94 - “Elapsed Time”: still relevant for invoicing?
Emmanuel Quemener
Abstract
Is “elapsed time” still relevant for re-invoicing? A look back at a few years of metrology... “An accumulation of time is no more a consumption of resources than a pile of stones is a house!” In our resource centres, most metrology (and invoicing) is based on “elapsed time”, sometimes “user time”. The arrival of massive data processing, particularly from the biology communities, is now creating a new paradigm: the main “stress” on our infrastructures no longer comes solely from demand on processing units but from the transport of data, in all its forms. Through an examination of metrology files covering several years of operation, originating from the Blaise Pascal Centre (project and test centre) and the PSMN (computing mesocentre) of ENS-Lyon, we will see that we now need to take other metrics into account to best assess these new uses, and therefore enable a better breakdown of their costs.
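The gap between the two classic metrics is easy to demonstrate: a job that mostly waits (here a `sleep` standing in for I/O or data transfers) accumulates wall-clock "elapsed time" while consuming almost no processor time. A minimal sketch, with the workloads invented for illustration:

```python
import time

wall_start = time.perf_counter()   # wall-clock ("elapsed time")
cpu_start = time.process_time()    # CPU time of this process ("user+sys")

time.sleep(0.5)                    # I/O-bound phase: clock runs, CPU idle
sum(i * i for i in range(10**6))   # short CPU-bound phase

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
print(f"elapsed: {wall:.2f}s, cpu: {cpu:.2f}s")
```

For a data-transport-heavy job, the elapsed figure far exceeds the CPU figure, which is exactly why invoicing on elapsed time alone misrepresents what the job actually consumed.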
103 - Deployment of Inserm's electronic laboratory notebook and new perspectives
Paul Guy Dupre - Claudia Gallina-Muller
Abstract
Since 2013, Inserm's ISD has been interested in Electronic Laboratory Notebook (ELN) solutions. The experimentation phase was presented at JRES 2017: at that time, we proposed a definition of the ELN and explained the challenges. The solution is used to describe and document the entire confidential phase of a research project. Some challenges remain: the laboratory notebook must meet legal and contractual obligations, in particular by providing proof of the invention and its inventors. Digital input has multiple benefits. The ELN improves the traceability of research, the fight against fraud and data management. It facilitates quality procedures and patent filing. It can provide new functionalities, such as collaborative work, remote access, project management, document management and access control. CLÉ, Inserm's Electronic Laboratory Notebook, has offered a solution for biology since 2018 and also supports inventory and equipment management. We will detail the call for tenders, the architecture and the change management aspects of this project. The ELN occupies a central position in the laboratory information system. It can provide solutions to meet current challenges and improve data management and the reproducibility of experiments. CLÉ can also interface with laboratory databases and our service catalogue (bibliography, financial management, time sheets, risk management, diaries, raw data storage, EDM). In conclusion, we will discuss the possibilities for interfacing with research data processing tools.
106 - EOLE distribution
Luc Bourdot
Abstract
EOLE Linux is the combination of a GNU/Linux distribution (Ubuntu, in this case) with specific integration and administration tools resulting from development carried out by the open source software competence centre of the Ministry of National Education. There are currently approximately 25,000 EOLE servers deployed, mainly in schools, regional education authorities and local authorities, but also, under the impetus of the Ministry of Ecology, in the Departmental Directorates for Territories (DDT). EOLE solutions are included in the interministerial open source software baseline (SILL). For 17 years, EOLE has been supporting the Ministry of Education's major digital projects. The use of open-source, scalable, adaptable software, together with agile governance that stays as close as possible to user needs, has made it possible to keep up with new digital uses without technological disruption and at a lower cost. EOLE meets the following objectives: - compliance with legal requirements (intellectual property, personal rights, etc.); - modularity, and therefore scalability and openness to market standards; - ease of implementation and mass deployment; - remote administration. The EOLE distribution is organised into modules. A module is a consistent set of selected software designed to meet a specific business need, such as a firewall. With our solutions, you can easily and quickly deploy an integrated, secure infrastructure with services such as SAMBA4, Apache, Nginx, MariaDB, StrongSwan, E2guardian, EoleSSO, etc. By using our Zéphir module to manage your computer assets, you can centralise all your configurations and orchestrate and monitor the services.
118 - Pod: podcast platform
Nicolas Can - Florent Fareneau
Abstract
Created in 2014 at the University of Lille, the POD project has been managed by the Esup-Portail consortium and supported by the Ministry of Higher Education, Research and Innovation since September 2015. The project and the platform of the same name are aimed at users of our institutions, allowing the publication of videos in the fields of research (promotion of platforms, etc.), training (tutorials, distance learning, student reports, etc.) and institutional life (videos of events), offering several days of content. Interest in POD continues to grow, with some 10 contributing institutions and some 30 institutions having deployed the software within the Higher Education and Research community and in other structures such as teacher training colleges (Brittany) or regional education authorities (Caen). Building on the enthusiasm and interest in the POD project and with the support of the Esup consortium, in 2018 we launched the development of version 2, released in September 2018. Currently in version 2.1, the interface allows the user to upload videos, add contributors, associate documents and user licences, enrich them (overlays, subtitles, captions) and offer them with adapted streams and new types of content (360° videos). In addition to the features already offered, such as chaptering and live broadcasting, we are continuing its development with automatic video transcription and synchronised, shared note-taking. We intend to present the functionalities of the tool, the community management of the project and its maintenance, before discussing its architecture, deployments, development and documentation, based on use cases at the University of Lille and the Université Polytechnique Hauts-de-France.
Author biography
Nicolas Can has been head of the web team within the information systems directorate of the University of Lille since September 2018. Florent Fareneau has been deputy director delegated to information systems within the information systems directorate of the Université Polytechnique Hauts-de-France since January 2018. An application development and deployment engineer since 2008, Nicolas began his public-sector career at the University of Pau before joining the teaching and multimedia service of the Université Lille 1 in 2011. Since then, Nicolas has held several roles: technical administrator of Moodle, developer on secondment to France Université Numérique, expert in the Agimus working group of the Esup consortium, and part-time lecturer. In 2003, Florent joined the nascent Esup-portail community as an infrastructure expert. He then joined the consortium's technical coordination as UPHF's representative in 2005. In 2015, Nicolas joined the information systems directorate of the Université Lille 1 as IT manager of the "Lilliad" learning centre, which opened in September 2016. Florent takes an active part in the life of the Esup community: in particular, he took charge of leading the ENT core working group in 2016 and contributes to various ESUP-portail projects: indicators, authentication, infrastructure, OAE, video management, etc. Thus, at the very beginning of 2015, he naturally worked on setting up, in his institution, one of the very first Pod instances at national level, and has supported its evolutions ever since. Nicolas began developing the Pod application in 2013. Deployed in April 2014 at the Université Lille 1, the application has been managed by the Esup consortium since September 2015. At the same time, Nicolas joined Esup's technical coordination to become developer/coordinator of the project and head of the video management workshop, roles he still holds today.
119 - Drive RENATER
Alexandre Salvat
Abstract
Over the past decade, various Internet players have been expanding cloud data storage offerings, in some cases with additional features. However, the equilibrium of the economic model is often ensured, on the one hand, by usage that becomes chargeable over time or according to use and, on the other hand, by the exploitation that can be made of data and metadata. To address these issues, many institutions have implemented their own solutions, thus constituting a rich functional and application ecosystem. However, several questions remain: • How can users of different community platforms share data in an authenticated and trusted way? • What kind of architecture can meet the needs of hundreds of thousands of users? • What mechanisms can allow the geographic distribution of this type of service? • How can minimum levels of security be guaranteed, in particular regarding control of access to the service and to stored information? Following the evaluation of several free solutions likely to provide a "drive" type service, GIP RENATER has started building a highly scalable solution in terms of access control, capacity (users, volumes, etc.) and distributed deployment, one that could interoperate with other similar services in the community. The implementation challenges are multiple and concern the choice of the solution as much as the design of the associated technical architecture, as well as change management and the organisation of ongoing operations (MCO).
133 - Migration of the University of Strasbourg messaging system to a shared collaborative tools solution
Xavier Pierre - Sébastien Finkbeiner - Laurence Moindrot - Simon Piquard - Patrick Hoffmann
Abstract
In the context of the constant evolution of digital technology and the daily use of IT tools, it is important to offer personalised use of each service provided to users. To take users' expectations and needs into account, several areas of digital tool development lie at the heart of the University of Strasbourg's overall information strategy. Among these tools, messaging is now one of the best-known applications and also the most frequently used. Beyond messaging itself, we are now asked to provide a set of collaborative tools around it. In line with this dynamic of adaptation to needs, we turned the messaging renewal project into the implementation of a collaborative suite, fully integrated with our information system. In 2009, the University of Strasbourg decided to deploy the SOGo solution as a messaging/calendar service for all its Osiris users and partners. For several years, however, SOGo has not met our users' expectations at all, specifically because it is a "simple" messaging/calendar tool. In addition to the functional aspects that caused dissatisfaction among users, the study also focused on the choice of hosting: self-hosting, cloud, SaaS, third-party application maintenance? In 2019, the University decided to migrate its messaging system to the Renater Partage solution, which met these various expectations. The purpose of the presentation is not to present Partage itself, but the choice of a shared tool within our community.
Author biography
===== Xavier PIERRE ===== I started my working life in the catering trade. I then decided to go back to school to turn my passion for IT into my new occupation, choosing work-study programmes through a baccalaureate, a BTS and then a bachelor's degree. Those years of study and apprenticeship allowed me to acquire technical and interpersonal skills. I was lucky enough to be recruited as head of the messaging domain at the digital directorate of the University of Strasbourg (2017), a directorate in which I work as domain manager (messaging) and administrator of collaborative applications (Seafile, Request Tracker, iTop, Sympa, etc.). As technical project manager, I led the project to replace the university's anti-spam system, then the renewal of the messaging system, moving from SOGo to PARTAGE. I have also had the opportunity to teach in maths/CS bachelor's programmes, which was very rewarding. All of this confirms my daily interest in IT and drives me to keep improving in order to put my skills at the service of others. ########################################################## ===== Simon Piquard ===== "If I had been born 2,000 years ago, I would have been born in Rome." He says it and everything becomes clear: we immediately understand that Simon hails from Nancy, in Meurthe-et-Moselle, city of light, centre of the modern world. As for his studies, they were no picnic, as the saying goes. To everyone's surprise he obtained a baccalaureate in economics and social sciences, then, in 2005, despite the surrounding scepticism, a BTS in sales through a qualification contract. In turn manager of a profit centre, head of a catering service and sales representative, he then drifted through jobs that hardly interested him, moving three times in three years before finally settling in the charming and sparkling Alsatian capital, Strasbourg.
Shortly after arriving, he joined the newly created University of Strasbourg (2009) as coordinator for the roll-out of the new student and staff card, "Mon Pass Campus Alsace". He was happy, the bosses were too, so he stayed. Ten years later, he has picked up a few scars and, above all, a great deal of experience as a business-side project manager (Pass Campus Alsace, self-service MFP copiers, the Alisée student records system, the unistra Seafile box/cloud, unistra Partage, physical access control in buildings), communications correspondent, trainer and change management lead (Windows 10, Ernest). He comes before you today, humbly, to talk about change management, usage and user-centred approaches on the recent projects Ernest ("Setting up a new social and collaborative working environment: a user-centred approach"), Partage ("Migration of the University of Strasbourg messaging system to a shared collaborative tools solution") and storage tools (poster: "Which storage solution to choose in the coming decade?"). Avé
134 - Deploying the 4P high-throughput phenotyping data processing platform on the France Grilles infrastructure
Vincent Nègre - Eric David - Marie Weiss - Philippe Burger - Romain Chapuis - Boris Adam - Anne Tireau - Patrick Moreau - Antony Tong - Gallian Colombeau - Samuel Thomas - Pascal Neveu - Jérôme Pansanel - Frédéric Baret
Abstract
The PHENOME-EMPHASIS [1] project, involving INRA, Arvalis and Terres Inovia, aims to develop high-throughput phenotyping infrastructures at national level. Field acquisition systems (unmanned aerial and ground vehicles) carry various sensors (high-resolution RGB, multispectral and thermal infrared cameras, LIDARs) which generate a large volume of images that must be processed, stored and archived. The prototypes of the data processing chains developed by the UMT CAPTE [2] have been industrialised and integrated into the Plant Phenotyping Processing Platform (4P). These modules, encapsulated in Docker containers, can be sequenced in workflows based on the Cromwell processing engine. Docker Swarm is used to distribute container execution across a cluster. The raw and processed data are stored on a distributed architecture based on iRODS technology. The 4P platform is connected to the PHIS information system [3] in order to store and organise the data produced by the PHENOME-EMPHASIS project, according to the FAIR principles. The 4P platform is fully integrated into the France Grilles infrastructure [4], a shared infrastructure for the computation and storage of scientific data that provides different services to users. To deploy the 4P platform, we used the FG-CLOUD service for the application part and the FG-IRODS service for the persistent data storage part. The poster will detail the functionalities offered by the 4P platform, the technologies used and the technical infrastructure, in particular the integration with PHIS and France Grilles. [1] https://www.phenome-emphasis.fr/ [2] https://www6.paca.inra.fr/emmah/Programme-scientifique-et-Equipes/Equipe-CAPTE [3] http://www.phis.inra.fr [4] http://www.france-grilles.fr
144 - Collaborative cloud
Camille Herry
Abstract
The concept of collaborative work is not new, but in recent years it has taken on a new dimension with the intensive use of IT tools and the Internet, which offers new prospects for organising work and implementing projects. The staff at the University of Lorraine have a growing need for internal and external collaboration, with colleagues from other universities or with partners. To meet these new needs, the University of Lorraine has set up a collaborative working environment, using a file synchronisation and sharing service based on the Nextcloud solution, supplemented by the open-source OnlyOffice online office suite. My presentation will focus on these different tools: * the infrastructure put in place to ensure high availability; * installation and routine administration; * the choice of technologies (Nextcloud / OnlyOffice) and feedback after one year of use.
Author biography
Camille Herry is an assistant engineer at the Université de Lorraine (UL), working in the digital directorate, infrastructure and services sub-directorate. A systems and network administrator, he has worked since October 2018 in the Integration and Virtualisation team, which is in charge of: - directory services (LDAP); - central authentication services (CAS, Shibboleth, RADIUS, etc.); - the messaging service; - hosting infrastructure spread across three server rooms. Previously, he headed an IT team on the ARTEM campus in Nancy, attached to the digital directorate, user services sub-directorate. He was also administrator of the institution's Active Directory, within a team of around ten IT staff.
154 - IFB-Biosphere, Cloud services for the Analysis of Life Science Data
Christophe Blanchet - Olivier Collin - Matéo Boudet - Stéphane Delmotte - Hervé Gilquin - Jean-François Guillaume - Efflam Lemaillet - Jonathan Lorenzo - Olivier Sallou - Bruno Spataro - Jérôme Pansanel
Abstract
The French Institute of Bioinformatics (IFB) offers different services for the processing of life science data, in part based on a federation of academic clouds. The Biosphère portal (https://biosphere.france-bioinformatique.fr) provides several interfaces to simplify the use of the IFB cloud: the RAINBio catalogue of model environments (appliances), a dashboard to manage deployments and a register of available public data. The IFB-Biosphère federation, initiated at the end of 2016, includes 5,200 cores and 26 terabytes of memory, divided between five sites based on OpenStack, federated with the Nuvla system. In addition to the basic components, more specific components, such as Manila for file-mode shared volume delivery, are required by the majority of bioinformatics applications. User management is based on the institutional credentials of the eduGAIN identity federation, with a Keycloak proxy and OpenID Connect clients. The bioinformatics appliances offer many common tools for the analysis of biological data, 32 of which are currently published in the RAINBio catalogue. These environments provide tools such as Conda, Docker or Ansible; high-level scientific interfaces (RStudio or Jupyter Notebook web portals); or a remote graphical desktop. Some environments include several components that rely on virtual machines or containers. The basic, extendable quota enables the deployment of VMs with up to 128 cores and 3 TB of RAM. The IFB-Biosphère cloud is used for scientific analyses that can be intensive (4,000 cores), and in many training sessions, scientific schools, university Master's degrees, workshops and hackathons.
Author biography
Christophe Blanchet is a member of the Centre National de la Recherche Scientifique (CNRS), with twenty years of experience in bioinformatics and scientific computing for the life sciences. Currently at the Institut Français de Bioinformatique (IFB, CNRS UMS3601), he has been involved since 2001 in many projects and infrastructures related to distributed computing for biology (EDG, EGEE, EMBRACE, STRATUSLAB, CYCLONE, ELIXIR...). Since 2014, together with Olivier Collin (CNRS, head of the GenOuest platform), he has co-led a national working group on bioinformatics infrastructures and the use of cloud infrastructures for biology, in which most of the authors of this JRES 2019 presentation have participated actively since its creation. This collaboration between experts in systems, cloud and bioinformatics led, as early as 2016, to the creation of IFB-Biosphère, a national federation of clouds for the processing of life science data, bringing together six IFB sites, either the IFB's own bioinformatics platforms or collaborations with regional computing centres.
168 - Deep Learning: I love you too...
Jean-Luc Parouty
Abstract
Reproducing or simulating "intelligence" is an objective that has long occupied us, and whose first scientific works predate computers. Long overlooked, machine learning techniques, and more particularly deep learning, based on neural networks, have made exceptional progress in recent years. Carried by the major Internet players, these technologies have become essential in the "big data" revolution. By linking experiment and modelling, and by opening up new approaches in the scientific method, these technologies have considerable potential in most of our scientific fields. The objective of this presentation is to provide simple, concise keys to understanding what deep learning is, its principles and its uses. We will approach this presentation as a triptych: * the first part will tell the tumultuous and uncertain story of these artificial neurons from 1940 to the present day; * we will then explore the different architectures and their possible uses, based on short demonstrations; * we will finish (probably late) with the service offerings and future prospects, especially regarding our infrastructures and professions. The whole presentation will be accessible to our natural intelligences, version 3.0 and up. Many thanks to www.DeepL.com/Translator and its artificial neurons for this translation!
Author biography
Jean-Luc Parouty is in charge of scientific computing support at the SIMaP laboratory (Science et Ingénierie des Matériaux et Procédés) in Grenoble. In this role, he supports the laboratory in coordinating and steering its digital scientific activities, providing expertise in the implementation of technical and methodological resources and strengthening the synergy between computational and experimental work. This expertise particularly concerns artificial intelligence. SIMaP is associated with the MIAI artificial intelligence institute (Multidisciplinary Institute in Artificial Intelligence) through a chair shared with the LIG (Laboratoire d'Informatique de Grenoble). Previous JRES contributions: "Technologie blockchain : Ange et/ou démon, pourquoi Bitcoin est-il incontournable ?" (2015); "Internet et usages : Google m'a tuer" (2011).
214 - OpenNebula: 100% Open Source elastic computing
Daniel Dehennin
Abstract
In the French Ministry of Education, a free software team has been building solutions for schools since 2001. Constantly seeking to improve the development of the EOLE GNU/Linux distribution, we discovered in 2012 a completely unknown galaxy. Are you in charge of the infrastructure of a two-person team, short on time but wanting the benefits of virtualisation? Are you part of a fifty-person team managing the infrastructure for 10,000 people? We invite you on a trip through our experience and community feedback, to discover the virtual infrastructure management system called OpenNebula.
Author biography
A free-software advocate since discovering computers at university, I looked for a job that would not compromise my values. After spending 8 years deploying and maintaining the EOLE GNU/Linux distribution in production in the Caen education authority, I joined the development team based in Dijon in 2011, in order to perfect my knowledge of oenology.
16 - Active Directory in universities: an approach based on security
Emmanuel Mesnard - Xavier Girardin
Abstract
The use of Active Directories is widespread in companies and in public organisations such as Higher Education and Research. At the University of Reims Champagne-Ardenne, each component had until now its own authentication system, unrelated to the central information system. This meant it was not possible to trace authentication on the workstations. To unify authentication and identification methods across our university's fleet of computers, we decided to implement a secure, centralised AD domain. To carry out this process, we were supported by a security expert and followed AD best practices. Our main objectives were certainly to improve the service to users, but also, and above all, to secure our new architecture. Recent news of attacks on ADs demonstrates that security must not be neglected during implementation, so we will address the following points in this presentation: * the implementation of a PKI-type architecture enabling IPsec links between servers, RDP security, PowerShell script restriction, and the synchronisation of our reference repositories; * the different FSMO (Flexible Single Master Operation) roles and their isolation; * the organisational and functional side of AD; * rolling out LAPS (Local Administrator Password Solution) on the workstations; * the AD audit that we carry out in a continuous improvement approach, with two open source tools, PingCastle and BloodHound, which give us security indicators.
Author biography
Xavier Girardin is a local IT specialist at the University of Reims Champagne-Ardenne. He is notably in charge of managing the computer fleet, servers and applications of the university library, spread over 10 sites in the Champagne-Ardenne region. He is an active member of various projects run by the local IT service, with particular attention to the Active Directory project, of which he is the administrator and project leader. Emmanuel Mesnard is a security engineer and CISO at the University of Reims Champagne-Ardenne. Three quarters of his time is devoted to operations and the rest to the CISO role. He particularly enjoys incident response, through incident analysis and handling (forensics) and network flow analysis, both professionally and personally, notably as an active contributor to the security challenge site Root-Me, for which he has created several forensics challenges.
49 - SIEM deployment and data enrichment
Thibaud Badouard
Abstract
The use of a Security Information and Event Management (SIEM) solution within RENATER came up against two challenges that we had underestimated: the burden of incident analysis and the complexity of data source management. In our poster, we invite you to discover what we implemented to alleviate these difficulties. We will address the following topics: * the enrichment of events reported by the SIEM with: external information sources (IP reputation sites, public blacklists), to increase the level of confidence in the results provided (is this machine part of a botnet? Have other organisations observed similar events?) and to facilitate the analysis work; internal information sources (internal databases, IP management tools), to adapt the response to be provided (does the event source belong to RENATER? To a university?) and to automate part of the incident response; * improving data source management by interfacing the SIEM with the virtual infrastructure, to reduce the number of unnecessary events analysed and facilitate the ingestion of new event types.
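The enrichment logic described above can be sketched in a few lines. The blacklist, the internal prefix and the routing labels below are hypothetical placeholders for illustration, not RENATER's actual data sources:

```python
import ipaddress

# Hypothetical local copies of the two kinds of sources the poster describes:
# an external reputation blacklist and internal IP-management data.
BLACKLIST = {"203.0.113.50"}
INTERNAL_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]

def enrich(event):
    """Annotate a raw SIEM event to guide (or automate) the response."""
    src = ipaddress.ip_address(event["src_ip"])
    event["blacklisted"] = event["src_ip"] in BLACKLIST
    event["internal"] = any(src in net for net in INTERNAL_PREFIXES)
    # Events originating from managed prefixes can be routed to automated
    # remediation; everything else goes to an analyst.
    event["route"] = "auto-remediate" if event["internal"] else "analyst-review"
    return event
```

In practice the lookups would hit live reputation feeds and the IP management tools rather than in-memory sets.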
62 - Improving confidence in the Éducation-Recherche identity federation
Geoffroy Arnoud - Guillaume Rousse - Anass Chabli
Abstract
The Éducation-Recherche identity federation enables simple, secure access to online services for the research and education community. It brings together more than 300 universities and research centres, and more than 1,200 services. The relationship relies mainly on trust between participants, based on a technical and organisational framework defined by GIP RENATER. It is therefore essential to maintain a high level of confidence in a still-growing federation. To reach this objective, actions are taken to control and improve data quality, both at the national and international (eduGAIN) levels, in the following areas: * updating the technical and organisational framework; * setting up controls to enforce compliance with the framework; * triggering corrective actions to ensure alignment with the framework; * collecting data and building global indicators about federation usage. We will present in this talk the progress of the work allowing RENATER to meet its objectives: * increase reliability, by improving interoperability between entities and lowering support activity; * improve user experience, by ensuring the availability of federated services, adequate presence in the national federation, and the quality of the metadata (name, logo, description); * improve trust, by enforcing the frameworks defined by RENATER, recalling the GDPR requirements, and encouraging commitment to optional certifications (SIRTFI, R&S, Code of Conduct); * provide authentication metrics and statistics at federation level.
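A metadata-quality control of the kind mentioned, checking that each entity publishes the user-facing elements (name, description, logo) of the SAML mdui extension, could be sketched as follows; the entity snippet in the test is fabricated:

```python
import xml.etree.ElementTree as ET

MDUI_NS = "urn:oasis:names:tc:SAML:metadata:ui"
REQUIRED = {"DisplayName", "Description", "Logo"}

def missing_ui_elements(entity_xml):
    """List the required mdui elements absent from one entity descriptor."""
    root = ET.fromstring(entity_xml)
    # Collect the local names of every element in the mdui namespace.
    present = {el.tag.split("}", 1)[1]
               for el in root.iter()
               if el.tag.startswith("{" + MDUI_NS + "}")}
    return sorted(REQUIRED - present)
```

A federation operator would run such a check over the whole aggregate and open corrective tickets for entities with a non-empty result.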
Author biography
Geoffroy, Guillaume and Anass are members of the team that operates the identity federation service provided to the education and research community by RENATER.
65 - FALCON: a practical, useful tool for the Systems and Network Administrator community
Jean-Luc Evrard - Marc Herrmann - Virgile Jarrige - Thomas Keller - Yasmina Ramrani - Sébastien Schmitt
Abstract
“I have to install an electronic certificate on a web server. I used to know how to do that, but I can't quite remember.” And for good reason: it was 42 years ago that our NSA colleague Tony Hustle joined the CNRS. Alone in the backyard of his lab, in his cornflower-blue pullover, Tony is afraid of losing precious time digging this lost information out of his memory. He thinks that if he had written a procedure at the time, everything would be simple. He thinks that other NSAs must have had the same problem. He thinks a lot. The security of his web server is at stake. “What now? What will I do?” sighs our colleague. Dig into his archives to find a network course by Maïté Elaisse or Jean-Luc Archibeau? Consult the Chabanne del Más shaman, then recompile the cryptokernel of the server in airplane mode? Wander the Internet in search of a kind, altruistic NSA who wrote and posted this precious procedure? Wander the Internet and meet (or not) Doctor FALCON? FALCON holds the answer to Tony's problem. FALCON is also an aid for all the Tony NSAs in France and beyond. Would you also like to take advantage of the experience of a few benevolent colleagues? Would you, in turn, be willing to share your knowledge in small pieces, by feeding a practical, collaborative and indispensable IS repository? Come to FALCON before FALCON comes to you. FALCON is made for you, by many.
68 - Should we believe everything we are told?
Stéphan David
Abstract
Encrypting the accounts of a novice victim with a good ransomware to extort a few bitcoins; a high-school student playing Mr. ROBOT and following an online YouTube "tutorial" to DDoS his school from his own Internet box; an angry president launching a massive attack, presumably because of a broken drone: cyber attacks are on the rise (up 48% in 2018) and regularly make the TV news. To face this, a multitude of new detection, correction and supervision tools has arrived, promising flawless efficiency for our operational security. Add to this a "trend" effect around cyber intelligence, derived from artificial intelligence, supposedly able to learn autonomously to detect, and even correct, any flaw in our information systems. What is the reality? I propose to share concrete feedback on several solutions tested in production in the datacenters of the Ministry of National Education. Vulnerability scanners, cyber-threat detection, SIEM: all tools that are supposed to eliminate every threat, and yet all require a closer look at the results they advertise. Are these tools fully autonomous? What place will be left for humans, faced with tools that process ever more information?
Author biography
Stéphan David took his first steps by decompiling games such as "Prince of Persia" or "Bomb Jack" to change the number of lives from 03 to FF. He then started out at a university as a systems and network administrator, managing distance-learning platforms, before turning to the deployment of security solutions. Passionate about cybersecurity, Stéphan joined the national network and security team of the Ministry of National Education in 2015. His work focuses on auditing and the operational security of the Ministry's datacenters. He oversees the deployment of the SOC with the Ministry's various teams. He regrets not having more time for CTFs, because he loves them. Here, Stéphan shares feedback on the deployment of security tools, and in particular vulnerability detection, across the datacenters of the Ministry of National Education.
77 - Outgoing proxy with HTTPS support without loss of encryption
Laurent Verheirstraeten
Abstract
The objective of the project is to prevent the downloading of hacking tools to servers via an outgoing session, while allowing legitimate updates of components, operating systems and application software. The architecture used is an outgoing proxy (Squid). Transparent mode is essential because the legitimate web clients are too numerous and varied to configure each one. The implementation principles are: * an active network device redirects all outbound TCP sessions on ports 80 and 443 to a proxy server that filters what is allowed; * the Squid proxy rejects anything not on the whitelist. The emphasis is on "Peek and Splice" technology, which uses the Server Name Indication (SNI) element, exchanged before encryption takes effect, even in recent TLS versions. This SNI is therefore available to filter outgoing sessions against a whitelist, without decryption of the stream, which is simply transported through a TCP tunnel. With this solution, it is no longer necessary to generate certificates on the fly for the proxy to operate in truly transparent mode. On this basis, the project essentially consists in drawing up a whitelist of Fully Qualified Domain Names (FQDN) covering all update requirements, after a log observation period. Verification of the logs may, if necessary, enable detection of any download attempt that might seem suspicious.
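The mechanism the proxy relies on can be illustrated independently of Squid: the server name sits in clear text in the ClientHello's server_name extension, sent before any encryption starts. The following sketch (not Squid's code) walks the handshake bytes to extract it:

```python
def extract_sni(data):
    """Return the SNI hostname from a raw TLS ClientHello record, or None.

    Offsets follow the TLS handshake layout (RFC 8446, section 4.1.2); no
    decryption is needed because the ClientHello is sent in the clear.
    """
    if len(data) < 5 or data[0] != 0x16:          # not a handshake record
        return None
    pos = 5
    if data[pos] != 0x01:                          # not a ClientHello
        return None
    pos += 4                                       # handshake type + 3-byte length
    pos += 2 + 32                                  # legacy_version + random
    pos += 1 + data[pos]                           # session_id
    pos += 2 + int.from_bytes(data[pos:pos + 2], "big")  # cipher_suites
    pos += 1 + data[pos]                           # compression_methods
    end = pos + 2 + int.from_bytes(data[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= end:                          # walk the extensions
        ext_type = int.from_bytes(data[pos:pos + 2], "big")
        ext_len = int.from_bytes(data[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:                          # server_name (SNI)
            name_len = int.from_bytes(data[pos + 3:pos + 5], "big")
            return data[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

A whitelist check then only needs to compare the extracted name against the allowed FQDNs before splicing or terminating the tunnel.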
95 - Learning about IT security using embedded systems
Philippe Egea - Olivier Fruchier - Faissal Bakali
Abstract
“You have just been recruited as a private detective by Mr Bob Hésite, CEO of TestSol, which specialises in solar thermal and photovoltaic panels. For your cover, you are hired as an industrial IT specialist for your skills in electronics and IT. Bob Hésite believes that you will successfully carry out your new duties and demonstrate all the abilities described in your CV, both in espionage and in industrial IT expertise.” Here's how our serious game starts! For the game to work properly, it's important to place students in a fun, dramatic context. We designed our scenario in the form of a police investigation. This investigation leads students through the scenario and helps them improve by solving problems step by step. The names of the characters are meant to be clues to the scenario and to guide them towards deciding who is guilty. The serious game was tested in real conditions with undergraduate IT and electronics students working in pairs, in February 2019. We will therefore provide full information on the creation and running of this game, which lasted 9 hours: * origin of the project, following a computer attack * equipment required * preparation time. By playing the game, we succeeded in making our students understand why installing a fraudulent executable can easily overturn a security policy. This game, designed in a Fablab spirit, is also intended to be a moment of sharing with the JRES community.
98 - Let’s stamp out phishing
Damien Mascré - David Verdin - Laurent Aublet-Cuvelier
Abstract
Out of the many evils that e-mail suffers from, phishing is like smallpox: rare, shameful, but unfortunately devastating. However: SPF, DKIM, DMARC, etc. The IETF is not lacking in RFCs to improve confidence in the messaging system. The aim of this multi-layered corpus of several hundred pages is to combat illegitimate messages using the "default mistrust" method. In short, it gives us a free hand to reject messages, even legitimate ones. Since the ARC RFC introduces the notion of selective trust, the relevance of filtering can be further improved. The purpose of this article is to (re)present these RFCs to you and show how they work together. Our community can defeat phishing if: * they are implemented on a large scale; * we converge our filtering policies. Although we have conducted experiments that we would like to share, this article will not give you a complete solution. We want collaboration within the community to begin here at the JRES. Everyone is here, so now is the time to speak. So come along if: * you want to know more about the latest RFCs; * you have ideas about the collective improvement of e-mail; * you want to take part in a collective effort that is not too demanding, in a friendly atmosphere; * you are a marketing manager at an optical fibre company and don't know where to take a nap.
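To make the interplay of these RFCs concrete: DMARC passes when SPF or DKIM both succeeds and "aligns" with the From: header domain. A toy illustration of the alignment test follows; the organisational-domain computation is a crude stand-in for the Public Suffix List lookup a real implementation would use:

```python
def org_domain(domain):
    """Crude approximation of the organisational domain: the last two labels.

    A real validator would consult the Public Suffix List instead.
    """
    return ".".join(domain.lower().split(".")[-2:])

def dmarc_aligned(from_domain, spf_domain=None, dkim_domain=None, mode="relaxed"):
    """True if at least one authenticated identifier aligns with From:."""
    def aligned(d):
        if d is None:
            return False                      # that mechanism did not pass
        if mode == "strict":
            return d.lower() == from_domain.lower()
        return org_domain(d) == org_domain(from_domain)  # relaxed alignment
    return aligned(spf_domain) or aligned(dkim_domain)
```

For example, mail sent with an SPF-validated envelope domain of mail.example.org aligns, in relaxed mode, with a From: of example.org, but fails strict alignment.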
Author biography
Damien Mascré is an e-mail engineer at RENATER. He has 15 years' experience administering e-mail systems in higher education, including 8 years in charge of e-mail at the University of Paris XIII. Within RENATER, he is particularly responsible for the evolution and modernisation of the e-mail infrastructure on which all RENATER services rely. Laurent Aublet-Cuvelier has been an engineer at RENATER for several years. He is responsible for the PARTAGE service (more than 400,000 users at the end of 2019) and the shared anti-spam service (3 million protected accounts). David Verdin is an engineer at RENATER, specialising in e-mail and mailing lists. For twelve years he has worked on the Sympa software, covering all aspects of it: development, training, promotion, deployment and service operation. He notably runs RENATER's mailing-list service, currently operated for some forty domains, serving 350,000 unique users and delivering 7 million messages per month.
108 - The Services Portal of the National Platform for Digital Trust of the French Ministry of National Education
Jean-Michel Lopez - Bruno Reine
Abstract
Digitisation and digital trust are key challenges for the public sector, at the heart of the digital transformation strategy led by the State. The actions undertaken by our Ministry are part of the State reform and the modernisation of public action. In 2014, the business needs of the Ministry of National Education, the Ministry of Higher Education and Research and its institutions gave rise to the National Platform for Digital Trust, now fully operational for the needs of the Ministry of National Education. The presentation will take place in two stages. Firstly, we will present in detail the resources implemented, the technological solutions, the architecture and the organisation that enable the solution to meet the business challenges of our Ministry. We will also address the cost and regulatory aspects related to the project and its operation. We will then discuss the new online service offering for the entire Education and Research community, developed in response to the needs expressed by our respective CISOs and supported by our National CISO. To conclude, we will present the planned changes to the infrastructure and the work carried out with a view to eIDAS certification, in order to meet the legal requirements for signatures, improve the overall service and develop new uses.
145 - Feedback on the setting up of an operational security center
Vincent Ribaillier
Abstract
In 2019, the Ministry of National Education and Youth (MENJ) and the Ministry of Higher Education, Research and Innovation (MESRI) established an operational security centre. The Operational Centre for the Security of Ministerial Information Systems (COSSIM) carries out both so-called synchronous missions (detection and immediate response) and asynchronous ones (analysis, qualification, technical measures and feedback). Firstly, we will present feedback on the start-up of this new entity, as well as the different phases that were needed to build it. We will also look at the proposed paths for consolidating interactions between the operational centre and the organisation's various security teams, with a view to optimising the ability to react to security incidents. We will then show the role of COSSIM with regard to education and research institutions, in particular assistance with managing their incidents. This assistance, complementary to that offered by the RENATER CERT, is focused on qualification and helps with remediation. It is structured around a web-based incident reporting and monitoring service that will enable the CISO community to connect with COSSIM cyber defence experts.
150 - Automatic phishing campaigns
Denis Joiret
Abstract
Phishing is one of the main social engineering techniques used to capture credentials. To raise awareness among its users, Inria has for several years regularly launched fake phishing campaigns. A campaign consists of sending a set of users an e-mail containing a link to a fake site, on which the user is encouraged to enter his or her credentials. The solution presented works automatically: selection of the users taking part in a campaign, sending of the e-mails, handling of user input on the fake site and processing of the data. We will describe the general sequence of a campaign, the operation of the fake site (initial page, pages returned to users following a connection) and the processing carried out on the data obtained. Since the data processed is personal data, the GDPR aspects are addressed (giving users access to their data, anonymisation of results, etc.). Finally, the general lessons learned from the results of the campaigns are presented, in particular whether the objective of improving user vigilance through phishing campaigns is achieved.
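The anonymisation step can be as simple as replacing each login by a keyed hash, so that per-campaign statistics (click and submission rates) survive while the link back to individuals disappears once the key is destroyed. A minimal sketch, with hypothetical field names:

```python
import hashlib
import hmac

def anonymise(results, key):
    """Replace the 'login' field of each result row by an unlinkable token.

    The same login maps to the same token within a campaign, so aggregate
    counts remain correct; destroying `key` afterwards breaks re-identification.
    """
    out = []
    for row in results:
        token = hmac.new(key, row["login"].encode(), hashlib.sha256).hexdigest()[:12]
        out.append(dict(row, login=token))
    return out
```

This is only an illustration of the principle; the actual Inria processing chain is not described at this level of detail in the talk abstract.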
Author biography
Denis Joiret has spent most of his career at Inria. Hired as a systems engineer at the Rocquencourt site, he was quickly put in charge of designing and deploying an Ethernet network covering the site. Following this deployment, a network team was formed under his leadership. As head of the Network and Telecom team, he ensured for some twenty years the evolution and smooth operation of all the network and telephony infrastructures of the Inria Rocquencourt site. Denis then chose to steer his career towards IT security, joining the Security Unit of Inria's IT department in 2011. It was in this role that he came to develop the phishing campaign system. He now works in the SOC (Security Operations Center), a service newly created within Inria's IT department to replace the Security Unit. He also serves as deputy CISO.
8 - Scripted installation of teaching PCs under Fedora 28
Frédéric Amrein - Pierre-Philippe Chapon
Abstract
Our automated installation system for teaching workstations at Polytech Clermont-Ferrand is based on a large-scale deployment approach. Our idea is to offer a reliable, optimised, simplified service for room installations and reinstallations, so as to best meet the school's teaching needs. Our PCs are installed under the Fedora Linux distribution via a PXE boot and a "kickstart" script. This script performs a highly customised installation of the system, broken down into three phases: * definition of the operating system installation parameters; * definition of the packages to be installed and those to be removed; * application of post-installation scripts enabling advanced system customisation. Subsequently, regular maintenance tasks, such as new package installations, are performed in batch via ParallelSSH. This system meets a very large number of teaching needs, but not all: students need some native Windows applications. So we also distribute a Windows virtual machine to all connected users via a BitTorrent server.
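A kickstart file for such an installation follows the three phases described. The fragment below is a hypothetical sketch, not the school's actual script:

```
# Phase 1: operating system installation parameters (hypothetical values)
lang fr_FR.UTF-8
keyboard fr
timezone Europe/Paris
network --bootproto=dhcp

# Phase 2: packages to install (and to remove, prefixed with "-")
%packages
@workstation-product-environment
gcc
-libreoffice*
%end

# Phase 3: post-installation customisation
%post
echo "room-specific setup goes here"
%end
```

The `@group` syntax pulls in a whole package group, and the `%post` section runs arbitrary shell commands in the freshly installed system.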
76 - Confessions of a working group: seven crucial steps to finding a unified OS and software deployment solution
Laurent Granier - Emmanuel Lestrelin - Frederic Bloise - Bernard Berenguier
Abstract
Following the merger of the three Aix-Marseille establishments, the differing practices in asset management at AMU revealed the dual need to harmonise practices and standardise asset management procedures. AMU set up a working group consisting of personnel from the Information System Operational Division (DOSI), with different geographical and technical backgrounds. The study lasted one and a half years and involved seven successive stages, seven ultimate temptations that led the working group towards the revelation of divine wisdom: - Statement of need: Envy - Market overview: Gluttony - Usage overview: Sloth - Initial filtering of known solutions: Greed - Functional tests: Lust - Choice of one or more ideal solutions: Pride - Proposals for implementation scenarios: ... Each of these stages will be presented in detail, then the outcome of the study will be presented and commented on.
78 - Efficient deployment of a software offer on an academic scale
André Rivoallan - Moncef Ziani - Matthieu Terre - Jean-Baptiste Faucheron
Abstract
The Information Systems and Innovation Department of the Rennes regional education authority manages over 60,000 workstations spread over 300 different sites and as many independent domains (Samba or Active Directory). Under the Peillon Law of 2013, which modified the respective responsibilities of the State and local authorities, and within a tightly constrained budget, the teams at the regional education authority have refocused their maintenance activities on engineering, by centralising the management of workstations, automating their commissioning and delegating their deployment. An initial study specified the standard configuration of the authority's Windows 10 workstation, with variants for teaching and administrative workstations, as well as a "disability" component. The application offering is organised into software catalogues (educational, administrative and "disability") for primary schools, lower secondary and upper secondary schools, each run by a dedicated project manager. The presentation will detail the organisational and technical aspects of the project, which means that users of the Rennes education authority are now given: * a consistent software offering, displayed to users; * improved workstation security, through continuous updating (WSUS / WAPT); * better monitoring of machines (GLPI / FusionInventory, remote application installation); * better quality of service (better support through uniform workstations and delegation to users); * implementation of authority-wide policies (software catalogues, security policy); * cost control (free or open source tools).
122 - WAPT or how to easily deploy
Florent Fareneau - Jean-Luc Petit - Benjamin Burnouf
Abstract
Since 2013, the University of Valenciennes has been reviewing its software deployment and maintenance solutions for its entire fleet of machines. Before that date, machines were deployed using master images that required significant storage capacity. Software adjustments had to be made regularly, directly on the client workstation. Monitoring updates to these workstations became problematic (security patches, different deployed versions, etc.). Our needs: facilitate deployment, management and upskilling; centrally manage all sites and financial aspects for over 2,000 client workstations. A state-of-the-art review of solutions for deploying operating systems and software was carried out: Dell KACE, SCCM, UpdatEngine, LANDesk, GPO, OCS, WPKG. We settled on the combination of MDT (Windows OS deployment) and WAPT (software deployment). WAPT's strengths: an open source product, a simple-to-use console, a software package library, a rapidly evolving application, a solution supported by a French company, a push-based and non-blocking update system, etc. The solution also had to allow rights to be delegated to the local team in order to create, deploy and stabilise packages. In this feedback we will cover the expression of our initial needs, our studies of deployment solutions, the reasons for our choice, our successes and limitations in use over the last five years, and our prospects for the future, with an API under development, a self-service application, etc.
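For readers unfamiliar with WAPT, a package is essentially a Python control script executed by the WAPT agent. The sketch below assumes WAPT's `setuphelpers` module and its `install_msi_if_needed` helper; the MSI name is hypothetical, and the script is not runnable outside a WAPT agent:

```python
# -*- coding: utf-8 -*-
# Sketch of a WAPT package setup script (runs inside the WAPT agent,
# which provides the setuphelpers module; not runnable standalone).
from setuphelpers import *

def install():
    # Install the MSI shipped inside the package, but only if the
    # software is absent or older than the bundled version.
    install_msi_if_needed("example-tool-1.2.msi")

def uninstall():
    # WAPT records the MSI product code, so removal is usually automatic;
    # explicit cleanup could go here.
    pass
```

Delegating package creation to local teams, as described above, amounts to letting them write, sign and publish such scripts in a shared repository.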
Author biography
With a generalist IT background, Florent Fareneau joined the team in charge of hosting, systems and networks in the central IT service of the University of Valenciennes and Hainaut-Cambrésis in 2003. Since 2005 he has served as information systems security officer, and between 2013 and 2017 he was given responsibility for the "service and network infrastructure" unit of the IT department, composed of engineer colleagues and the IT technician team. It was during this period that the project to modernise the deployment tools was launched, with its various impacts on cross-cutting network and systems matters. Since January 2018, Florent Fareneau has been deputy director for information systems within the IT department of the Université Polytechnique Hauts-de-France. Benjamin Burnouf graduated in 2015 and joined the IT department (DSI) of the University of Lille 1 as a systems engineer, then took part in the merger of the three Lille universities. During that period he worked on pooling tools and developing new working methods with colleagues from the different institutions. Benjamin has now been with the DSI of the Université Polytechnique Hauts-de-France, formerly the University of Valenciennes and Hainaut-Cambrésis, for about a year. He joined the infrastructure unit with the following tasks: administration of the hosting infrastructure; keeping user services in operational condition; modernisation of the institution's tools. After graduating as an engineer, Jean-Luc Petit joined the Laboratory of Industrial and Human Automation (LAIH) of the University of Valenciennes, associated with the CNRS, in 1987.
For 10 years, he contributed with the university's IT service to the development of the network and system infrastructures. In 1998, following a competitive examination, he joined the IT service that would become the DSI of the University of Valenciennes, where he has since worked as a systems and network engineer on a wide variety of assignments.
130 - Managing workstations over the Internet with Microsoft SCCM
Yves Daniou - Julien Mercier
Author biography
- Yves Daniou has worked at the rectorate of the Académie de Grenoble since 2007. After starting as an operations technician in the desktop fleet management department, he coordinated its technical work for several years as fleet manager. Since 2015 he has been a systems and network administrator, in a technical environment oriented mainly towards Linux and free software. His work on Microsoft SCCM was carried out as part of an internship and an engineering dissertation at the Conservatoire National des Arts et Métiers (CNAM). - Julien Mercier manages and administers the deployment solutions in the fleet management department of the DSI of the Académie de Grenoble. He works mainly on automating the installation and configuration of Windows applications and operating systems. More familiar with open-source tools such as OCS and GLPI and with scripting languages such as AutoIt, batch and PowerShell, he is now training in the use of Microsoft System Center Configuration Manager as a central fleet management tool.
137 - VDI, an asset for teaching?
Jonathan Staimphin - Regis Khamchanh - Thomas Fourez
Abstract
2012: The three universities in Marseille merged into an immense establishment of 78,000 students and 10,000 staff, divided into five campuses spread over several cities. The objective, which was unrealistic at the time and is ambitious today, is for teachers and students to be able to find the same service and working environment on any site. Any class must be able to take place from any room! The management of the pool of teaching computers has become very cumbersome: over 5,000 workstations with a large number of software applications to maintain, offering little flexibility and requiring a lot of energy from the IT teams. How can we optimise this work to achieve this objective? How can we enable them to spend less time on repeated tasks and, above all, how can work done on a campus be easily reused for the entire establishment? To address this issue, we have turned to Virtual Desktop Infrastructure (VDI) with the HORIZON VMWARE solution. This article presents the approach that led us to embark on the VDI experience. We therefore propose to focus on the different aspects of this project: choice of the VDI solution, business model, choice of infrastructure, organisation of teamwork, problems encountered (of all kinds). Above all, what can the VDI contribute to teaching? More generally, we want the community to benefit from our feedback in this implementation phase, and share our successes, doubts and the challenges that are still to be overcome.
Author biography
Jonathan started out in 2007 managing the teaching computer fleet at the Université de Lille 3, then left hop country for olive groves and continued his career much further south at the Université d'Aix-Marseille 3, which became Aix-Marseille Université in 2012. He specialised in server infrastructure, hypervisors and storage, and it is as project manager for the VDI roll-out that he presents this article today. Now a fleet manager at the faculty of medicine of Aix-Marseille Université, Thomas arrived from the beaches of Cannes three years ago and has taken part in the VDI project since its beginnings in Marseille. He campaigns fervently for the harmonisation of master images and working methods, another big subject... Régis has set off on a crusade with the VDI team to free teaching from its physical constraints. Wearing his fleet-manager's helm, he lends his support on the V(irtual) and D(esktop) questions; legend has it that he will develop I(nfrastructure) skills that will see him knighted by his peers.
141 - PDQ Deploy/Inventory
Laurent Chieppa - Loic Leforestier
Abstract
The issue of managing a large fleet of computers is a classic one. Machines are up to date when commissioned, but tracking software updates and distributing new configurations are major ongoing tasks. Admin Arsenal's PDQ software suite helps us manage our computer assets as efficiently as possible. With PDQ Inventory, we have real-time monitoring of the inventory of a workstation and we know the software installed and the configuration of the machine (hardware, IP address, available storage capacity, etc.). It also allows us to uninstall software, enable Wake-on-LAN, etc. It is possible to build collections of machines, either static (grouped manually by the technician) or dynamic, based on filters (IP range, software version, etc.). PDQ Deploy is used to deploy applications remotely and silently, to keep the installed software up to date. We also use it when rebuilding our teaching rooms. Installations are transparent to the user, silent, and performed in several steps (uninstalling the old version, rebooting, file copying, installation, execution of a software customisation script, etc.). Sometimes we also schedule installations (when the machine starts up, every lunchtime, as soon as the machine is visible on the network, etc.). We chose this solution because it offers greater flexibility in implementation and management than the full SCCM solution.
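PDQ's dynamic collections boil down to filtering an inventory by machine attributes. A minimal sketch of that idea in Python, with an invented inventory format and field names (this is not PDQ's actual data model or API):

```python
import ipaddress

# Hypothetical inventory records; PDQ's real data model differs.
inventory = [
    {"host": "room1-pc01", "ip": "10.1.4.21", "java": "8u191"},
    {"host": "room1-pc02", "ip": "10.1.4.22", "java": "8u231"},
    {"host": "lab-pc05",   "ip": "10.2.0.5",  "java": "8u191"},
]

def dynamic_collection(inventory, subnet, outdated_before):
    """Select machines in an IP range whose software version is outdated."""
    net = ipaddress.ip_network(subnet)
    return [m["host"] for m in inventory
            if ipaddress.ip_address(m["ip"]) in net
            and m["java"] < outdated_before]

# Machines in room 1 still running an old Java build become deployment targets.
targets = dynamic_collection(inventory, "10.1.4.0/24", "8u231")
```

The collection is recomputed on each evaluation, so a machine leaves it automatically once the update has been deployed.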
Author biography
Laurent Chieppa, 38. I joined the civil service in February 2004 as a teaching logistician, and in early 2005 joined the CRIP of the faculty of medicine and pharmacy, attached to the DSI of the Université Joseph Fourier. After passing the technician's competitive examination, I moved to the DSI of the Université Pierre Mendès France in 2012. When the Grenoble universities merged in 2016, I joined the fleet management team of the DGDSI of the Université Grenoble Alpes. I take part in projects to harmonise fleet management methods and tools, and I set up MDT, the deployment tool for all of the university's computers. I also contribute to other projects, such as the roll-out of the PDQ Deploy and Inventory remote application deployment tools, and then the working group on GPO management around the UGA Active Directory. Finally, I have joined the technical committee of SUMMER (Stockage Unifié Mutualisé Massif Evolutif et Réparti).
162 - osquery: is there anything wrong with my OS?
Mickaël Masquelin
Author biography
Mickaël MASQUELIN (mickael.masquelin@univ-lille.fr) CRIStAL – UMR CNRS 9189 ====== Quick introduction ====== I joined the CNRS in 2004 through an external competitive examination, after initial training oriented towards computing (application development, as it happens) and the use of new communication media (the web). After successively holding the posts of infrastructure manager, systems and network administrator and technical unit manager, in June 2019 I joined the Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL). Most of its staff (around 500 people) are based on the Cité Scientifique campus of the Université de Lille. I am assigned to the IT and technical unit as deputy head of the department. My main activities consist of proposing, then implementing or steering, the deployment of functional and technical architecture projects for the unit's information system. With my team, I try to put in place the full DevOps toolchain (Terraform/K8s/Docker/Vault) for our users, so as to offer them a high-performing, scalable and modular IS. On the side, I like the cloud, I like free software, I like automation, I like sharing, I like complicated terms, I like clean designs... In short, I like DevOps :-)
19 - Mutualised platforms in Grenoble: Virtualization & Storage - Automation & economic model
Guenael Sanchez - Mathieu Panel
Abstract
For the last five years, the University of Grenoble Alpes has been working on shared platforms around data storage (SUMMER), virtualisation (WINTER) and the core network for Data centres (SPRING). These platforms are very popular in the academic community. SUMMER therefore exceeds 3 petabytes of storage, WINTER is approaching 1,000 virtual machines hosted and SPRING now has 1,700 network ports, representing a hundred different rightholders, each with delegated access to the business tools. This ramp-up brings significant challenges for the teams in charge. How to formalise, facilitate and automate the various business processes? Provisions for storage space or virtual machines, including network configuration, delegation of access to services, hierarchical validations, invoicing or monitoring of rental contracts. How can we make sure we don’t forget anything, avoid onerous tasks and monitor requests over time? Through the partnership between the technical and administrative teams, we will present to you the tools put in place to streamline the different procedures and offer relevant dashboards. We will also address the human organisational aspect, detailing the roles and functions of the different stakeholders, from users to administrative staff, including the various technical teams. Using BonitaSoft technologies, the APIs of the Vmware, Netapp and Cisco tools, and the tools from our BIPER reference framework, we will look at how the dialogue between these different components was built, guided by the BPMN modelling of the business processes.
Author biography
Guenael SANCHEZ: Having fallen into computing when he was small, Guenael is a computer engineer at the Université Grenoble Alpes. Since the merger of the universities, he has been actively involved in several shared projects in the Grenoble area. Three of these projects have become reality: SPRING, which provides the Grenoble data centres with an IP fabric and a very high-speed network; SUMMER, which offers data storage; and WINTER, a shared virtualisation platform. Mathieu PANEL: Engineer and application developer at the Université Grenoble Alpes. He contributes to the evolution and interconnection of the various building blocks of the information system, and manages the institution's business-process automation platform. Within the WINTER (shared virtualisation platform) and SUMMER (data storage) projects, he develops the automation tools.
23 - Rspamd: free antispam software, efficient, scalable and customizable
Gauthier Catteau
Abstract
The demise of e-mail has been announced for many years, yet in 2019 e-mail has never been so present and essential. However, phishing messages are becoming increasingly pernicious, the number of spam e-mails has exploded, and users have become more demanding about the quality of filtering and message delivery time. Today, e-mail is only workable thanks to the effectiveness of our anti-spam solutions and the people who administer them. I have decided to introduce you to the free software Rspamd, which came into being 10 years ago but long remained little known. Rspamd is an effective, efficient solution that scales easily, simply by adding nodes. It elegantly addresses almost all the filtering issues I have encountered over the last 20 years. And if something you want is missing, it is easy to write extensions in the Lua language to expand its features.
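Rspamd, like other score-based filters, works by having each matched rule ("symbol") contribute a weight to a total score, which is then compared against action thresholds. A toy sketch of that scoring model, shown in Python rather than the Lua used for real Rspamd rules, with invented symbols, weights and thresholds:

```python
# Illustrative score-based filtering: each matched rule ("symbol")
# contributes a weight; the total is compared against action thresholds.
# Symbols, weights and thresholds here are invented, not Rspamd's.
RULES = {
    "SUBJECT_ALL_CAPS": (lambda msg: msg["subject"].isupper(), 3.0),
    "FROM_NO_DISPLAY_NAME": (lambda msg: "<" not in msg["from"], 1.5),
    "HAS_UNSUBSCRIBE": (lambda msg: "unsubscribe" in msg["body"].lower(), -0.5),
}
THRESHOLDS = [(15.0, "reject"), (6.0, "add header"), (4.0, "greylist")]

def classify(msg):
    """Sum the weights of matched symbols and pick the strictest action."""
    score = sum(w for check, w in RULES.values() if check(msg))
    for limit, action in sorted(THRESHOLDS, reverse=True):
        if score >= limit:
            return score, action
    return score, "no action"

msg = {"subject": "YOU HAVE WON", "from": "lottery@bulk.example",
       "body": "Click to claim"}
score, action = classify(msg)
```

Writing an extension then amounts to registering one more symbol and weight, without touching the decision logic.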
Author biography
Currently head of the systems team at the Direction des Systèmes d'Information of the Académie de Lille, Gauthier Catteau has been a systems administrator for almost a quarter of a century. Alongside these academy duties, Gauthier takes part in several working groups within the Ministry of National Education on various infrastructure topics. He has also been a member of the steering committee of Min2rien, the regional professional network of the Hauts-de-France, for eight years.
27 - Around the SYMPA software
Benoît Marchal
Abstract
Sympa is a complete, configurable mailing list manager. In a large university (10,000 people, over 62,000 students), even the naming of lists is difficult. It must be made easier (creation, pre-defined list type, etc.) both for novices and for experienced administrators. Consistency and automatic population are necessarily linked to the information system. Automatic reminders to 'avoid' validation or authentication omissions, etc., special lists for writing specific scenarios, etc. Finally, a number of routines and configurations that are adapted to our structure in varying degrees. So how can this be achieved? At the University of Lorraine, we have written dedicated interfaces, periodic scripts, made adaptations using the software possibilities (Custom Conditions, scenarios, filters, etc.). We have over 5,000 lists for staff, 2,000 lists for students and a few more for partners managed on our various robots. This presentation aims to give an overview of what we have achieved. Why not share it with others?
Author biography
Benoît MARCHAL is a research engineer at the Université de Lorraine (UL), working in the digital directorate, infrastructure and services sub-directorate. Head of a team in charge of server operations, he deals more specifically with backup and the web and video server infrastructure. For many years he has managed mailing lists with the SYMPA software, first in an engineering school, then at the level of the INPL (Institut National Polytechnique de Lorraine) and, since the merger of the four universities of Lorraine, for the UL's lists, for which he is the point of reference. He has contributed some translations and proposals to the community project. Around SYMPA, the university has developed a whole environment tied to the information system to make list management easier for users.
35 - Dare to Kubernetes!
Rémi Cailletaud
Abstract
Kubernetes has a reputation for being a complex system, only useful for substantial infrastructure. It is often said that running stateful applications on it is complicated or even reckless. And yet... Kubernetes definitely brings new concepts, but it is nothing more than a container orchestrator drawing on the known, proven functionalities of the Linux kernel. Using the example of the modelling and then production roll-out of Kubernetes clusters at the Observatoire des Sciences de l'Univers de Grenoble (OSUG), we will show that this tool is neither opaque nor magical, and that it is suitable for organisations of all sizes, both for system administration tasks and in the context of DevOps practices. We will see how it facilitates teamwork and programmable infrastructure through continuous integration and deployment. We will justify the choice of Kubernetes, detail its concepts and then explore its internal operation and API. We will then present the technical choices we have made for provisioning and deploying the clusters, then the infrastructure choices for the clusters themselves: L2 load balancing, reverse proxying, dynamic volume provisioning, metrics, monitoring and alerting. Finally, we will discuss the good practices that these tools allow us to implement. The declarative configuration of the clusters lets us move towards a programmable infrastructure and develop GitOps methods. We will briefly present these methods and the tools used to implement them.
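The orchestrator idea at the heart of Kubernetes is a reconciliation loop: compare the declared desired state with the observed state and compute corrective actions. A toy sketch of that loop (names and structures are illustrative, not the Kubernetes API):

```python
# Toy reconciliation: given a declarative desired state (app -> replica
# count) and the observed state, compute the corrective actions. This is
# the core idea behind a container orchestrator, not the Kubernetes API.
def reconcile(desired, observed):
    actions = []
    for app, want in desired.items():
        have = observed.get(app, 0)
        if have < want:
            actions.append(("start", app, want - have))
        elif have > want:
            actions.append(("stop", app, have - want))
    for app, have in observed.items():
        if app not in desired:          # running but no longer declared
            actions.append(("stop", app, have))
    return actions

# Declared 3 web replicas but only 1 runs, and "old-job" was removed
# from the declaration: the loop starts 2 and stops the stray workload.
plan = reconcile({"web": 3, "db": 1}, {"web": 1, "db": 1, "old-job": 2})
```

Because the input is declarative, the same description can live in git and be re-applied idempotently, which is exactly what the GitOps methods mentioned above build on.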
Author biography
Rémi Cailletaud is technical manager at OSUG and spends part of his time on the shared projects of the COMUE UGA, in particular the Nova project that Nicolas has just presented to us.
36 - Cloud infrastructure for scientific microservice workloads (OpenStack, Kubernetes and Docker)
Mohammed Khabzaoui - Yvan Stroppa
Abstract
As part of our development of scientific projects and of research such as Automathon (real time), ExecAndShare (scientific computing) or our researcher network platform (big data), and as part of the provision of resources for the mathematics community and the Université de Lille, we needed a cloud-type infrastructure able to host all these applications made up of microservices and capable of offering high-performance services. Services offered by this infrastructure include continuous integration and deployment (CI/CD), automated resource provisioning (containers, workspaces) and orchestration. All this is integrated into a secure environment that guarantees service continuity (HA, replication and monitoring). The orchestration service is built on a Kubernetes infrastructure based on Docker and hosted in an OpenStack-type cloud. In this way it provisions itself from its private cloud, a reservoir of resources it can draw on at will, automatically but in a controlled manner. Our communities' centralised IT resources are very unequal, and when developing IT projects of a scientific nature we often have to use an implementation solution external to our institutions (costly and difficult to maintain in the long term). For that reason, we have set up this infrastructure to meet the needs that emerge from scientific projects.
60 - Kubernetes, the PC-Scol/Pégase operating system
Vincent Hurtevent - Thomas Montfort - Raymond Bourges
Author biography
====== Vincent Hurtevent ====== He discovered higher education and research computing in Poitiers, at the Laboratoire d'Etudes Aérodynamiques. He then spent ten years at the DSI of the Université Claude Bernard Lyon 1 as a systems administrator and infrastructure manager for digital services, research support and business applications. Motivated by the ambition of PC-Scol and the desire to deliver innovative, high-performing business services, he joined the PC-Scol project in early 2018. Since then, as an Ops engineer, he has been building the infrastructure and tooling needed to construct the PEGASE software and working on its integration into a Kubernetes environment. ====== Raymond Bourges ====== In 1993, he joined the management-computerisation department of the Université de Rennes 1 as DBA for the new student records application: Apogée! He then did some peripheral development work and set up the first intranets. In 2000 he joined what was not yet the DSI to deploy an LDAP directory fed by the management applications, took part in the first Supann standard and contributed actively to setting up the ESUP-Portail consortium, becoming its technical coordinator. In 2012 he put together a team of developers working in agile mode (Scrum). In 2014 he was seconded to the Cocktail association, before finally taking the post of technical director of the PC-Scol project at the end of 2017. ====== Thomas Montfort ====== He took his first steps as a systems administrator at INSA Toulouse in 2001, where he spent several years cultivating his passion for the command line and penguins. He then decided to climb the ladder, and the Alpine summits, by joining SIMSU in Grenoble in 2011, where he discovered the demands of multi-institution pooling and Apogée installations. He therefore looked towards the ambitious and innovative PC-Scol project for new challenges.
Its recent technologies, physically distributed teams and usefulness motivated him to join the Ops team in 2018.
64 - Nova - A rainbow cloud over the Alps
Nicolas Gibelin - Rémi Cailletaud - Gabriel Moreau - Jean-françois Scariot - Gabrielle Feltin - Anthony Defize
Abstract
A pooled, shared, on-demand Infrastructure as a Service (IaaS), based on the OpenStack software suite, was rolled out on the Grenoble university campus in 2018 and updated in 2019. We present the methods used to deploy and manage the infrastructure: racadm and preseed for basic system installation, then Kolla for the OpenStack deployment. This latter solution, based on containers for each service, enables a centralised, version-controlled configuration (GitLab) of controllers and compute nodes, and is the benchmark solution for a reproducible deployment of OpenStack. We have been able to expand our cloud easily with new nodes. The change of version of the base OS was also successfully tested, despite a few small hitches... As security is a key element in the proper operation of this type of shared service, each project has been made watertight and its data perfectly isolated from other projects, thanks to the encryption of all network flows in VXLANs. This OpenStack cloud platform is operational. What is it all for? For example, our first users use Jupyter notebooks through the provision of JupyterHub servers (web portal); the Distributed Health Assessment IT System (SIDES project); continuous integration in connection with the GitLab platform; tests of the Kubernetes container scheduler; and computing and visualisation software, etc. Highly varied uses that other platforms had difficulty offering. Nova, a new platform, was born.
Author biography
Nicolas Gibelin is a systems, network and development engineer at the UMS GRICAD (Grenoble Alpes Recherche - Infrastructure de Calcul Intensif et de Données), created to address today's scientific needs in high-performance computing and data. He is involved in the network and server infrastructure and is responsible, among other things, for the Notebook services and the Nova cloud platform.
71 - E-mail service in university: si vis pacem, para bellum
Daniel Le Bray - Quentin Desre
Abstract
Despite the emergence and growing importance of new communication tools, e-mail is still widely used by academic and scientific communities. To face up to the risks and meet the needs and requirements of availability, reliability and resilience, we decided it was necessary to rethink our service architecture and integrate it into a sustainable and scalable approach. Although the analysis we have conducted is based on strategies (planning, anticipation) and tactics (what actions should we take? How do we react?), we have not immersed ourselves in Sun Tzu’s writings, but in the good practices and experience gained over time. The security of a server or the availability of a component are not isolated points; we have redesigned our entire messaging service through a global approach, so that it can best meet expectations in terms of use, operation and development. We have therefore studied, tested and selected several technical solutions to address the multiple forms covered by this service. Today, this approach means we can respond to the various aspects of the usage policy (CAA, SPF, DKIM, DMARC), the access policy (SASL, Postgrey, Postfwd), the separation of flows (different access perimeters, SMTP exchanges, mailbox manipulations), host filtering (IPTables, Fail2Ban), the protection of exchanges (Rspamd, SpamAssassin, ClamAV), the availability (physical and virtual servers, KeepAlived), the monitoring and operational maintenance of processes (statistics, logs, Monit).
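As an illustration of the usage-policy side, a DMARC policy is published as a DNS TXT record made of tag=value pairs separated by semicolons. A minimal sketch that parses such a record (real validators handle many more tags and edge cases):

```python
# Minimal parser for a DMARC policy record: a DNS TXT record of
# "tag=value" pairs separated by semicolons. Illustrative only; real
# validators also check tag order, defaults and alignment rules.
def parse_dmarc(txt):
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if part and "=" in part:
            key, _, value = part.partition("=")  # split on the first '='
            tags[key.strip()] = value.strip()
    return tags

# Example record: quarantine 50% of failing mail, send aggregate reports.
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.org; pct=50"
policy = parse_dmarc(record)
```

The `p` tag is the action a receiver should apply to mail that fails SPF/DKIM alignment, which is how the sending domain publishes its policy to the world.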
Author biography
Daniel Le Bray and Quentin Desré are both design engineers at the Université Le Havre Normandie, in the Systems & Networks unit of the Centre de Ressources Informatiques. Both serve as systems administrators; Daniel Le Bray has been in post since 1997 and Quentin Desré joined the team in 2011. Working as a pair, they carry out the day-to-day management of the virtualisation infrastructure, the storage solutions and various digital services for the institution. Among the many projects they are in charge of, recent years have been marked above all by the adoption of virtualisation and the major evolution of the institution's e-mail service.
111 - AFUL's participation in the free/libre software stand
Jean Thiery - Jean-yves Jeannas
121 - FG-iRODS: a data management service for national and international scientific communities based on a federated infrastructure.
Jérôme Pansanel - Catherine Biscarat - Raphaël Flores - Pierre Gay - Christine Gondrand - Emmanuel Medernach - Patrick Moreau - Vincent Nègre - Geneviève Romier
Abstract
For several years, all scientific disciplines have been faced with a deluge of heterogeneous data, both in terms of acquisition (field collection, experimentation, modeling, simulation, etc.) and formats. Simply providing sufficient storage resources is no longer adequate for new scientific use-cases: these resources need to be exploited now and in the future; the extension of the volume, storage and sharing of data must also be designed from the outset of projects to comply with the principles of FAIR. In order to support and meet the needs of researchers, several partner laboratories at France Grilles have developed expertise in the iRODS software since 2012 and set up a federated infrastructure based on geographically distributed resources to provide a data management service called FG-iRODS. This shared service, sized to host small and medium-sized projects (up to a few hundred terabytes), can: * process large volumes of data, potentially distributed on several sites with heterogeneous infrastructures and hardware; * provide physical file organisation that is transparent to users and closely manage their file access rights; * search for data by metadata queries and facilitate the management of large data collections; * provide remote access to data (command line, web interface, API, network sharing). The article will detail the objectives of the FG-iRODS project, the new hardware infrastructure and software, the service offering and a specific use case.
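iRODS attaches metadata to data objects as attribute/value/unit (AVU) triples, and a metadata query selects the objects matching given attribute=value pairs. A toy sketch of that model over an in-memory catalogue (paths, attributes and values are invented):

```python
# Mock iRODS-style catalogue: each data object carries AVU
# (attribute, value, unit) triples. Paths and attributes are invented.
catalogue = {
    "/zone/projectA/run-001.h5": [("instrument", "telescope-1", ""),
                                  ("year", "2018", "")],
    "/zone/projectA/run-002.h5": [("instrument", "telescope-1", ""),
                                  ("year", "2019", "")],
    "/zone/projectB/survey.csv": [("instrument", "drone", "")],
}

def query(catalogue, **criteria):
    """Return object paths whose AVUs match every attribute=value criterion."""
    return [path for path, avus in catalogue.items()
            if all((attr, val) in [(a, v) for a, v, _ in avus]
                   for attr, val in criteria.items())]

# Find the 2019 data taken with telescope-1, wherever it is stored.
hits = query(catalogue, instrument="telescope-1", year="2019")
```

Because the query is expressed against metadata rather than paths, the physical location of files across the federated sites stays invisible to the user, which is the point of the service.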
128 - Automation and delegation of processing with Rundeck
David Chochoi - Guillaume Laville
Abstract
In current information systems, services are increasingly distributed across numerous servers, both physical and virtual. How can you obtain a summary view of scheduled tasks and their execution results? How can you delegate certain sensitive actions in an isolated, secure and traceable manner? How can you easily automate scenarios and save on manual handling, without compromising security? Rundeck is a solution to all these issues, tested and implemented within the Dijon regional education authority (approximately 300 servers for 46,000 staff). With this tool you can define groups of servers, jobs, users and access rules to manage the daily operation of an IT infrastructure. It also provides a REST API so it can be integrated into third-party applications, whether pre-existing or created to meet specific needs. There are five main elements in this long presentation: - the factors that led to this analysis and the choice of Rundeck, in the context of an academy IS in production; - a presentation of the Rundeck tool, its functionalities and technical characteristics; - an illustration of how it can be integrated into existing work processes, using three examples; - the use of the Rundeck REST API to launch tasks and monitor their status from locally developed applications; - the securing of servers, jobs and access rules, in liaison with the academy's RSSI. To conclude, it focuses on the further changes that Rundeck can bring to an institutional structure such as a regional education authority.
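As a sketch of driving Rundeck from a locally developed application: the REST API authenticates with an `X-Rundeck-Auth-Token` header and a job is launched with a POST request. The server URL, API version and job id below are illustrative, and the request is only built, never sent:

```python
import json
import urllib.request

# Build (but do not send) a request that launches a Rundeck job via the
# REST API. Server URL, API version and job id are illustrative; the
# X-Rundeck-Auth-Token header carries the API token.
def run_job_request(server, token, job_id, options=None):
    url = f"{server}/api/41/job/{job_id}/executions"
    body = json.dumps({"options": options or {}}).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"X-Rundeck-Auth-Token": token,
                 "Content-Type": "application/json",
                 "Accept": "application/json"})

req = run_job_request("https://rundeck.example.org", "SECRET-TOKEN",
                      "c0ffee", options={"target": "web01"})
# urllib.request.urlopen(req) would launch the job and return the
# execution record, whose id can then be polled for status.
```

Keeping the token in a header rather than in the URL means it never ends up in proxy or access logs, which matters when the API is opened to third-party applications.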
Author biography
David Chochoi is a member of the systems team of the DSI of the rectorate of Dijon. His main responsibilities are the installation and maintenance of the Ministry of National Education's applications at the academy level, as well as e-mail, authentication, identity federation and application security. Guillaume Laville, after significant experience in scientific computing, now works in the same team as David. His responsibilities include hosting the academy's in-house developments and the school and communication websites, as well as managing the collaborative work platforms (OnlyOffice, Owncloud, media server, academy project). He is also involved in the security validation of new applications in the IS.
129 - Enough with cron!
Pierre Gambarotto
Abstract
The historical way to schedule the execution of a task is to refer to a clock, which is exactly what cron does. System administrators all undergo the initiation rite of creating or managing a line in a crontab, and the widespread knowledge of this tool partly explains its intensive use. Cron triggers a script at a fixed date: a time event. However, today's systems are distributed, and we need to be able to coordinate tasks across several servers. For example, the creation of an individual's IT representation in HR software must trigger the creation of login credentials in the LDAP or Active Directory, then create a file account and an e-mail account on two other servers. The events in question are therefore no longer related to a clock, but to the steps of a software process: the end of a task on one server must trigger the start of another on another server. Such architectures are called asynchronous. The tools presented enable system administrators to set up and manage work sequences on asynchronous architectures. Two techniques are presented, each developed around an example: - remember that in Unix everything is a file: inotify, available since Linux kernel 2.6.13, allows you to react to the events that punctuate the life cycle of a file: creation, modification of content, modification of metadata, deletion; - using git hooks: publishing a new version of files from the git repository leads to the execution of a task.
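The git-hook technique can be sketched as follows: a post-receive hook receives one "old-sha new-sha refname" line per updated ref on its standard input, and the hook (which may be any executable, including a Python script) decides what to trigger. The branch name and revision values below are invented:

```python
# A git post-receive hook reads one "<old-sha> <new-sha> <refname>" line
# per updated ref on stdin. This sketch parses those lines and decides
# which pushes should trigger a deployment task; in a real hook the
# lines would come from sys.stdin. Branch and sha values are invented.
def refs_to_deploy(lines, deploy_branch="refs/heads/main"):
    to_deploy = []
    for line in lines:
        old, new, ref = line.split()
        if ref == deploy_branch and set(new) != {"0"}:  # all-zero sha = deletion
            to_deploy.append((ref, new))
    return to_deploy

pushes = ["1111aaa 2222bbb refs/heads/main",
          "3333ccc 4444ddd refs/heads/feature-x"]
work = refs_to_deploy(pushes)
```

The push itself is the event: no clock is involved, and the task starts the moment the new version is published, which is exactly the asynchronous coordination the abstract argues for.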
Author biography
First a developer and trainer in the private sector, Pierre Gambarotto developed his sysadmin skills, and his beard, at ENSEEIHT (N7) in the digital sciences department. He is now IT manager of the IMT (Institut de Mathématiques de Toulouse) and a trainer on the fullstack application developer university diploma at Toulouse INP Formation Continue. His interests are functional programming, automation and system architectures.
132 - Which storage solution for the next decade?
Alain Heinrich - Sébastien Finkbeiner - Laurence Moindrot - Simon Piquard - Xavier Pierre
Abstract
Since 2014, the University of Strasbourg has chosen to deploy the "Seafile" cloud storage solution for all its staff and students. While this solution is entirely satisfactory in terms of simple storage and sharing of individual documents, we realised that it had several limitations. Seafile is also rarely used in other Higher Education and Research establishments in France, which makes exchanges with the community difficult. In recent months, we also noted a marked expectation among our users for more functionalities related to storage, in particular collaborative work solutions and exchanges around storage, with their colleagues in the laboratory and establishments and also their colleagues from other partner establishments in France and worldwide. We have initiated a new study to develop our storage service and to try to find a solution that meets new needs. There are several options: * continue to invest in Seafile; * await the initial results of the solution proposed by Renater; * launch a community project with other Higher Education and Research institutions. The aim of this poster is to discuss the outcome of our study with the community and also to initiate a more general reflection on the partnerships of the future, including: * a state of the art report of the solutions in 2019; * an assessment of the functionalities requested by our users; * a mapping of the solutions deployed in Higher Education and Research (Nextcloud, OneDrive, Seafile, etc.); * a state of the art report on interoperabilities between current storage solutions.
135 - Reproducible System Administration with GNU Guix
Julien Lepiller
Abstract
Have you ever had an update that made your system unusable? An installation that didn’t behave exactly as you expected? The impression that you could do nothing but reinstall everything to return to a correct state? Would you dare to type Ctrl-C in the middle of an upgrade? Have you ever managed to rebuild exactly the same system twice? To understand the difference between two systems? GNU Guix is a transactional package manager that can be used on any existing Linux-based system or as a standalone distribution. We will see that Guix rests on three main properties to provide all of its benefits: transactional updates, bit-for-bit reproducible packages and a centralized, integrated, declarative stateless configuration mechanism. GNU Guix can also manage temporary environments like VirtualEnv does and long-term environments like apt-get does, in the form of simple profiles, containers, virtual machines or complete operating systems, while maintaining the declarative, reproducible and transactional approach every time. We will see how Guix can help you manage and keep different services and software under control, and how it can help system administrators as well as scientists who want to reproduce the work of their colleagues. Among Guix’s possibilities, we will see how software environments can be obtained in a reproducible fashion, without the inherent opacity of “container images”, while remaining interoperable with Docker or Singularity.
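The "transactional updates" in the abstract rest on a simple mechanism: a profile is a symlink into an immutable store, each upgrade builds a new "generation" directory, and switching is a single atomic rename, so rollback is the same cheap operation and Ctrl-C mid-upgrade leaves the old generation untouched. A minimal Python sketch of that idea (Guix itself implements this in Scheme over a content-addressed store; all names here are illustrative):

```python
import os
import tempfile

def switch_generation(profile_link, generation_dir):
    """Atomically repoint `profile_link` at `generation_dir`.

    Each generation is an immutable directory; the user-visible
    profile is only a symlink, so upgrade and rollback are both
    one atomic POSIX rename.
    """
    tmp = profile_link + ".tmp"
    os.symlink(generation_dir, tmp)   # build the new link aside
    os.replace(tmp, profile_link)     # atomic switch

# Demo in a scratch directory.
base = tempfile.mkdtemp()
gen1 = os.path.join(base, "generation-1"); os.mkdir(gen1)
gen2 = os.path.join(base, "generation-2"); os.mkdir(gen2)
profile = os.path.join(base, "profile")

switch_generation(profile, gen1)
switch_generation(profile, gen2)   # "upgrade"
switch_generation(profile, gen1)   # "rollback" is the same operation
print(os.readlink(profile))
```

At no point is the profile in a half-updated state, which is what makes the update transactional.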
166 - HPC Singularity: share and improve repeatability of your simulations
David Brusson - Michel Ringenbach - Vincent Lucas
Abstract
The scientific approach is based on the reproducibility of experiments. For experiments requiring significant computing resources, the use of a supercomputer becomes necessary. This raises the problems of differences in architecture, system, libraries, and communications between calculation nodes, which are complex notions to be grasped. The resolution of these issues has a dual interest: • Easily replay the simulations and consequently allow validation of the search results; • Open the calculation resources to a greater number of users, thanks to simplified use. To meet these needs, since 2019, the University of Strasbourg has offered a container service within its mesocentre, based on Singularity technology. These containers include both the operating system, specific libraries, and the compiled scientific software. This therefore makes it possible to respond to most of the problems of reproducibility and ease of distribution. In this article, we present container technologies, the benefits of Singularity in an HPC environment, and its interoperability with Docker. We then detail the architecture as well as the performance obtained. The conclusion summarises these functionalities and presents the changes in expected uses: real ease of use with simulations that can be easily reused and improved.
167 - Development of the configuration management and orchestration tool at the University of Strasbourg
Ludovic Hutin - François Ménabé
Abstract
Ansible is a free tool for configuration management and multi-system orchestration. Simple and accessible, it has gradually established itself as a market standard. Used at the University of Strasbourg, Ansible has become a prerequisite for the deployment of new applications. Historically, several tools have been used to manage the configurations (or even orchestration) of our various infrastructures and applications. For the past two years, we have been converging our tools into a single one: Ansible. Firstly, we will show the project approach used to carry out this complex and time-consuming transformation, which requires the involvement and training of all stakeholders and, above all, standardisation of practices. We will then show the prerequisites necessary for the consistent deployment and production deployment of VMs, integrated into the information system and secured. To do this, we have developed several Ansible modules and now support a significant number of operating systems (Windows, CentOS, Red Hat, Ubuntu). Our VM deployment playbook has been adapted to integrate it into Ansible Tower, so that all colleagues can create VMs with all the basic configurations. Finally, we will present Ansible Tower. This tool has helped us to improve quality and efficiency and to delegate complex tasks to all our colleagues. We have developed a dynamic inventory plugin enabling us to interface it with GLPI, offering us the possibility of executing bulk actions on a set of servers according to advanced search criteria.
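Ansible dynamic inventories are executables that print a JSON inventory when invoked with `--list`; the GLPI plugin mentioned in the abstract presumably follows this contract. A hypothetical sketch with the GLPI REST query stubbed out (the host names and the `fetch_hosts_from_glpi` helper are invented for illustration, not the authors' actual plugin):

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic-inventory script (sketch)."""
import json
import sys

def fetch_hosts_from_glpi():
    # Stub: the real plugin would query the GLPI REST API with an
    # advanced search and return the matching servers.
    return [
        {"name": "web01.example.org", "group": "webservers", "os": "CentOS"},
        {"name": "db01.example.org", "group": "dbservers", "os": "Ubuntu"},
    ]

def build_inventory():
    # Ansible expects groups mapping to host lists, plus a `_meta`
    # section carrying per-host variables.
    inventory = {"_meta": {"hostvars": {}}}
    for host in fetch_hosts_from_glpi():
        group = inventory.setdefault(host["group"], {"hosts": []})
        group["hosts"].append(host["name"])
        inventory["_meta"]["hostvars"][host["name"]] = {"os_hint": host["os"]}
    return inventory

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory(), indent=2))
    else:
        # `--host <name>` is not needed when `_meta` is provided.
        print(json.dumps({}))
```

Because `_meta.hostvars` is included in the `--list` output, Ansible never needs to call the script once per host, which matters when the inventory covers a whole server fleet.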
Author biography
Ludovic Hutin, of the digital directorate (DNum), heads the "Cloud platform and integration" unit in the "Infrastructures" department of the University of Strasbourg. The unit's main mission is to deploy and keep in operational condition all of the institution's hosting infrastructure (virtual servers, databases, web hosting, etc.). It also contributes to deploying the applications operated by the DNum's various services. François Ménabé works in the same department as Ludovic. As a system administrator he works mainly on the storage, backup, virtualisation and authentication infrastructures, and has recently become involved in application integration topics.
5 - End to end application testing - example with GLPI (IT assets management software)
Cédric Villa - Denis Buffenoir
Abstract
Have you ever needed to test an application after a version change? Have you ever broken an application after a version or configuration change? In a context of rapid, regular developments in applications, each change introduces a risk of regression to the processes. However, these developments are often necessary and expected by users. Testing every change is onerous and repetitive, takes time and may introduce errors. Correcting a fault often results in other malfunctions. Tests performed in an automated, reproducible manner guarantee the conformity of the result. Automation enables instant feedback and the earliest possible confirmation of no regression. An “agile” testing strategy, in a DEVOPS approach, has led us to develop these tests easily and quickly. The more tests we produce, the greater the scope of possible regressions covered. The tools we have used are simple and accessible. Continuous integration has meant we can automate the execution of these tests, in an environment similar to production. It has also made it possible to generate reports for use by the project members. What we have done in our testing for the GLPI application can be used for other applications in our information system. This feedback and its implementation can provide all the foundations of a solution adapted to your applications.
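The structure of such automated, reproducible regression tests can be sketched with Python's `unittest`; the page-fetching function below is a stub standing in for a real HTTP client or browser driver run by continuous integration (the names and checks are illustrative, not the authors' actual GLPI suite):

```python
import unittest

# Hypothetical fetcher: in a real end-to-end test this would drive an
# HTTP client or a browser against a staging instance of GLPI; it is
# stubbed here so the test structure itself is the point.
def fetch_login_page():
    return {"status": 200, "title": "GLPI - Authentication"}

class LoginPageRegressionTest(unittest.TestCase):
    """Checks re-run automatically after every version or config change."""

    def test_page_is_reachable(self):
        # A broken deployment usually shows up first as a non-200 status.
        self.assertEqual(fetch_login_page()["status"], 200)

    def test_title_did_not_regress(self):
        # Guards against a configuration change silently replacing the page.
        self.assertIn("GLPI", fetch_login_page()["title"])
```

Run with `python -m unittest` from the CI pipeline; because the checks are scripted, every change gets the same verification at no marginal cost, which is the "instant feedback" the abstract describes.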
Author biography
Cédric Villa is a DevOps engineer from Nancy, where he worked in the IT department of LORIA (Laboratoire Lorrain de Recherche en Informatique et ses Applications) from 2005 to 2017. He developed and maintained some of the laboratory's collaborative and institutional tools (asset management, institutional websites, continuous integration platform, business-intelligence reporting). In 2017 he joined the Inria IT department in Sophia Antipolis to implement Inria's centralised asset management solution. One of the project's challenges was to deploy and regularly upgrade the tool within its ecosystem, while continuously testing for the absence of regressions. --- Denis Buffenoir is a project manager at the Inria IT department. His latest projects concern the building blocks of a unified asset information system: a physical inventory of Inria, asset management with GLPI, and the management of Inria's real estate with a tool integrating BIM (Building Information Modeling). The asset management work may lead to two further projects: a business-intelligence reporting tool and a non-regression testing tool.
87 - SANTORIN: Feedback on a DevOps Infrastructure
Yann Guernalec - Cédric Leproust - Fragnol Florent
Abstract
The ISD of the Rennes Education Authority is responsible at national level for the design and management of the IS for examinations and competitive examinations. In a context of reducing resources and increasing our scope of action, we are carrying out a technological and organisational transformation that aims to increase the quality of service and reduce the downtime linked to production deployments. To reach this target, we have chosen to automate deployment in a DevOps approach. The national SANTORIN application, deployed in a Kubernetes cluster, was chosen to be the driver of this change. On 17 and 19 June 2019, this application managed the transition to digitisation of the preliminary Baccalaureate exams. 200,000 copies were scanned and made available to the markers. The next objective for the 2020 Baccalaureate is to manage 800,000 copies in half a day and 30 million copies per year. This transformation of our deployment methods means we can now deploy new versions in a few minutes, without service interruptions, and offer a "scalable" service to respond to the ramp-up automatically. In a “DevOps” spirit, “We”, a member of the SANTORIN Development team and a member of the Ops team, are going to talk to you about the deployment infrastructure we have put in place, the transformation it has made for us and the issues we have encountered.
104 - Agility in organizing an agile project: the PC-Scol experience
Michel Allemand - Ludovic Boudy
Abstract
PC-Scol is a joint project by the AMUE and the Cocktail Association to overhaul the education and student life information system for all higher education and research institutions. PC-Scol is part of a collaborative working framework in co-construction with several university institutions. We will describe the operational organisation put in place to manage a project to co-construct a software solution of this magnitude in agile mode, with development teams from Higher Education and Research located in several universities across the territory. We will present the agile methods we use: the “Scrum” method for each team and the “Safe” method for agility at scale, to coordinate the work of the teams. The software factory and technical architecture used will also be detailed to illustrate the interdependence between the methods, tools and organisation. We will also discuss the difficulties we have encountered since the start of the project in 2017, the support we have proposed and the organisational changes that have been made, to show that it is also necessary to be agile in organising and steering an agile project. The prospects for co-administration of the Pégase solution developed by the PC-Scol project will conclude our presentation.
Author biography
Michel Allemand: holding a PhD from the Université de Provence followed by a postdoc at IRISA, Michel Allemand began his career in 1997 as a lecturer and researcher in computer science at the Université de Nantes. Having created the university's IT department as information system adviser, he became its director in 2008. He then moved on to national missions, as an expert at MIPNES and in monitoring coherence frameworks. Since 2017 he has headed the PC-Scol project, run jointly by the AMUE and the Cocktail Association, to build the future education and student life information system for Higher Education and Research institutions. Ludovic Boudy: 18 years of experience developing software solutions. Developer, project manager, then project director within the Figaro group, Ludovic Boudy has been operational director of the PC-SCOL project since 2017. For 8 years he has been leading and supporting agile teams developing management applications and websites.
140 - Automated Application Deployment: Towards a new paradigm where the customer becomes an actor
Frédéric Colau - David Rideau - Didier Mathian - Fabien Belcayre - Delphine Sallé
Author biography
====== Frédéric COLAU: ====== Having fallen into the magic potion, er... into computing when he was very small, Frédéric made it a passion that follows him both at work and at home. After studies in telecommunications and networks, then in information systems administration, and various odd jobs, he landed at SIMSU 13 years ago as a system and database administrator. Quickly drawn to virtualisation to multiply machines, he naturally turned next to automation and the DevOps approach, to orchestrate the incessant ballet of application deployments. ====== David RIDEAU: ====== David is not really a born computer scientist: in automation processes, he mostly seeks to satisfy his contemplative streak by watching machines work on their own. What interests him more deeply is how to encourage a project's actors to overcome every form of individual, hierarchical and functional barrier and move towards the great idea of a collective work. His conviction is that "human" integration should always come first, because it is what generates meaning and value, and that the real key to reaching goals lies in the paradox of never putting them forward. ====== Delphine SALLE: ====== Delphine is an engineer in business computing. At the university since 1987, she has worked in various fields, but since 1991 mainly around student records. All her work is inter-university. What she likes best is handling data: she was responsible for migrating student-record data during the merger of the University of Grenoble. Long the quality officer of her department, her priorities are efficiency and reliability. She hates firefighting. Trained in the Agile method, she anticipates and works with patience and constancy to reach her goals, always mindful of operational requirements. ====== Fabien BELCAYRE: ====== After a DUT in computing, where he got a first look at the jobs of system and network administrator, developer and project manager, Fabien completed his studies with a professional bachelor's degree specialising in information systems security. After two jobs in the private sector, for the past 8 years he has been sharing his skills and know-how at SIMSU, where his main mission is operating-system administration and writing scripts and routines to simplify and make more reliable the day-to-day operation of the server fleet. ====== Didier MATHIAN: ====== Didier has been with SIMSU since 1999. Mainly interested in systems and storage, he strives for homogeneous architectures that are simple to maintain, reliable and fast. He is also reassured when his similar systems are installed in strictly the same way. He likes to find his little aliases and little tools on every system he works on.
50 - Shibboleth IdP vs Apereo CAS, "the best of all possible worlds"
Ludovic Auxepaules - Anass Chabli
Author biography
Within RENATER's Information Systems Security unit, Ludovic Auxepaules and Anass Chabli are part of the Federation team, which operates the Fédération Éducation-Recherche.
57 - MyAcademicID : for a unique European Student eID for higher education
Hervé Bourgault
Abstract
MyAcademicID focuses on developing a European Student eID scheme for higher education. This will allow students to identify and register themselves electronically at higher education institutions when going abroad on exchange and to access different student services in Europe. The digital infrastructure supporting the European Student eID for Higher Education will be the result of the integration of eduGAIN and the European Student Identifier and the establishment of digital bridges between them and the eIDAS interoperability framework being rolled out by the European institutions. Moreover, the project seeks to integrate the European student eID into four e-services: the Online Learning Agreement, the Erasmus+ Dashboard, the Erasmus+ Mobile App and the PhD Hub Platform. Additionally, the Portuguese national student ID (Estudante ID) will be made interoperable with the European Student eID, showcasing how national identity providers can join this digital scheme. Future integration with Erasmus Without Paper is also foreseen. The scalability of the project and the potential for integration of the European Student eID with a myriad of other student services (both online and offline), not only pave the way for seamless student mobility and a stronger, reinforced European student status throughout Europe, but make MyAcademicID a key component of the European Student Card Initiative spearheaded by the European Commission.
59 - Federative Identity and Access Management: a Campus perspective
Gautier Auburtin - Johann Holland
Abstract
Identity management has been a subject of growing concern in higher education institutions for several years. For a long time, user account management was limited to authentication requirements. It was then extended to the issue of permissions and finally extended to identity federations. At the same time, identity management was consolidated around business processes and repositories, then on the basis of common repositories. Today, as the cornerstone of an information system, ensuring its security, homogeneity and fluidity, identity management is still however determined by institutional logic and boundaries, with the exception of identity federations. The Condorcet Campus, a new SHS research campus, comprises around 100 research units from 11 institutions. The borders within the system are blurred, as the daily residents and more widely the users have cross-departmental affiliations. In this context, the implementation of a common identity repository for the entire Campus has encountered several difficulties: identifying the source; determining precise affiliations; ensuring entry and exit from the repository; setting up permissions; ensuring access to Campus services; and securing authentication. The Condorcet Campus identity repository seeks to provide innovative solutions to these challenges. It is based on federative processes and technologies (identity managers, identity federation), allowing users to choose their source identity when registering. It also mobilises contacts for invitations and validations (entries), in this way providing a repository of hosted structures and permissions that allow access to Campus services through an authentication delegation mechanism for the original directory.
Author biography
Johann Holland is an engineer specialising in innovation strategies and digital policies. Involved in the Campus Condorcet project for 8 years, he heads its digital unit. His current role lets him apply and develop, in a context of shared governance and pooled resources, the skills he previously acquired through various projects and consulting assignments. His experience with European public institutions (Europeana, through the CASPAR and Athena projects) and national ones (CNRS, INA, the IRI of the Centre Georges Pompidou), as well as with private companies (Orange, EDF R&D), has made him familiar with questions of usage and change management, continuous innovation, digital engineering and technology transfer. A documentalist by training, Gautier Auburtin is now head of information systems at Campus Condorcet, where he has spent 4 years building the information systems and IS repositories. His experience with web and documentary information systems (CNRS, EPHE, INHA) has extended to identity and directory management, authentication needs and online service offerings.
90 - Directories for Higher Education and Research – SupAnn News
Aï-eng Bompoil - Benoit Branciard - Pierre-olivier Terrisse - Sylvain Brachotte
Abstract
Implementation of the SupAnn recommendations (Directories for Higher Education in France) within the different higher education and research organisations is now essential. The ever-increasing range of digital services and applications has made it necessary to establish a consistent framework for the exploitation and exchange of directory-type data, both via identity federation mechanisms and for local uses. The aim of the SupAnn recommendations is to define a common technical framework and vocabulary that meets these needs. Since SupAnn’s first publication in 2003 and its 2008 and 2009 updates, developments in digital services in institutions have led to new needs in terms of structuring, exchange and data control. The generalisation of repositories, the interconnection with FranceConnect and the sharing of applications require further SupAnn development. To this end, in October 2016 the RENATER public interest grouping relaunched a working group bringing together representatives from Higher Education and Research establishments. Their work led to the publication in September 2018 of a new version of SupAnn, enhancing the previous one in three areas: 1. Interoperability; 2. Representing new information; 3. Wording and formalities. The poster proposed for JRES 2019 presents, through the activities of the working group, the current version of SupAnn, its uses, and current and future work. This discussion will make it possible to find out about SupAnn, take an interest in it, comment on it, and perhaps invest in its future developments.
123 - MFA and 2FA on Shibboleth IdP, Apereo CAS servers and Federations
Ludovic Auxepaules - Guillaume Rousse
Abstract
With the rise of phishing and account hacking, SFA (Single Factor) authentication, generally based on a “username/password” pair, has reached its limits. More and more authentication systems offer enhanced security using several authentication factors, known as strong authentication (or MFA, Multi-Factor Authentication). MFA is recognised as improving security when it combines at least two distinct authentication factors (2FA) from these three common categories: * a memory factor, "what you know" (a PIN code, etc.), * a material factor, "what you have" (a U2F key, etc.), * a body factor, "what you are" (biometric authentication). Security can also be improved by adding other factors: reaction, location or time... In this presentation, we discuss strong authentication in the context of the two main authentication systems used in the Education-Research federation: Shibboleth’s Identity Provider (IdP) and Apereo’s CAS server. Firstly, we describe the choices, use cases and levels of trust (or LoA, Level of Assurance) that justify the implementation of enhanced authentication. We also specify the constraints and difficulties related to strengthening authentication in the context of identity federations. Next, we describe different possible MFA implementations in the Shibboleth IdP and CAS server, as well as the new SFA and MFA profiles published by REFEDS (the Research and Education FEDerations group). Finally, we conclude with the advances introduced by the W3C WebAuthn (Web Authentication) standard in 2019.
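Of the factors listed, the "what you have" category is commonly implemented with TOTP one-time codes (RFC 6238), which authenticator apps generate and which an IdP or CAS MFA flow can verify server-side. A self-contained sketch of the verification arithmetic, checked against the test vector published in RFC 6238:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code (HMAC-SHA1 variant).

    The shared secret is base32-encoded, as in authenticator-app
    provisioning URIs; `at` is a Unix timestamp (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # → 94287082
```

In a real deployment the server would accept codes from a small window of adjacent time steps to tolerate clock drift, and rate-limit attempts.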
Author biography
Ludovic Auxepaules and Guillaume Rousse both discovered the Fédération Éducation-Recherche as administrators of authentication services in their home institutions (Sorbonne Université for one, INRIA for the other). They now work as operators of that same federation, within RENATER's Information Systems Security unit.
126 - Quarantine and remediation: communicate more, block less
Sébastien Beaudlot - Jade Tavernier - Maxime Charpenne
Abstract
There are various exceptional reasons for interacting with our users: the validity of their credentials, agreement to the Acceptable Use Policy, accounts blocked for sending spam, and so on. Because of the severity of these events, or the need to interact with users, the associated procedures are often manual and the digital identities are sometimes affected (blocking or suspension). The staff dealing with digital identities and those in contact with their owners are different people. Communication is therefore laborious and the consequences can be very harmful for users. As authentication is mandatory for everyone, it is the perfect time to display information and give owners a way to unlock their digital identities themselves. By combining quarantine and remediation procedures, we are trying to reduce, through better communication, the difficulties users may encounter: * quarantine focuses attention at the moment credentials are entered; * the information-remediation application makes it possible to monitor blockages and resolve them. We have built an application and an API around the widespread and proven CAS and LDAP services to provide these new tools: quarantine and autonomous remediation.
Author biography
Sébastien Beaudlot is a system, network and telephony administrator in the DevOps unit at Avignon Université. Jade Tavernier has been a developer in the DevOps unit at Avignon Université since 2014. Maxime Charpenne has been at Avignon Université since 2008; after 9 years as a system administrator, he has headed the DevOps unit since 2017.
151 - One Hub to process them all
Sophie Schaal - Long Ya - Sébastien Simenel - Olivier Adam
Abstract
Since 2007, in the Rennes Regional education authority, we have provided a "Toutatice" digital work space (VLE) for all those involved in primary and secondary education in Brittany, i.e. around 1.5 million users. How can we easily identify staff, students and their legal representatives for access to the digital services offered by the Ministry, Regional education authority, local authorities and institutions, taking into account the diversity of authentication protocols and application APIs? In 2015, we presented a solution that consolidates SAML2 and CAS applications behind a single service provider with a CASSHIB module enabling the "shibbolisation" of a CAS server. Since 2017, as part of the IPANEMA PIA project, we have been upgrading this solution to take into account the OIDC - OpenID Connect protocol, user consent and increase its availability. The solution to date, built on Shibboleth version 3, is interoperable with all academic and national identity providers. It is efficient, highly available, monitored, and compatible with current standard protocols: CAS, SAML2 and OpenId Connect. It can connect VLE Toutatice to EduConnect, a new National Education identity provider, as well as to the GAR, the digital resources access service led by the Ministry. We will present the results of our work to you, detailing the architecture, implementation and pathway that led to the implementation of this genuine HUB of multi-protocol identities.
Author biography
Sophie Schaal, a software engineer, entered the education sector 9 years ago. In recent years she has worked as a cross-functional project manager in the Information Systems and Innovation Directorate (DSII) of the Rennes education authority. For 3 years she has coordinated the Rennes work on the national experimentation project "Digital identity and user relations" (IPANEMA). It is in this context that she collaborates on the territorial identity HUB project with: * Olivier Adam, CISO and technical director of the DSII, sponsor of the IPANEMA project and perpetual supplier of the project backlog. In short, the boss! * Sébastien Simenel, a systems engineer specialised in production integration and the team's superhero, who puts out a production fire like no one else. * Long Ya, a development engineer who crossed over to the dark side of DevOps after joining the team. Shibboleth holds no secrets for him!
152 - Itinerary of a digital identity management system within a research institute
Antoine Gallavardin - Christophe Monrocq - Guillaume Perréal
Abstract
Merger, migration, outsourcing, name change, etc. These are all stages in the life of a Public Scientific and Technical Research Establishment (EPST) and therefore have an impact on the information system. Identity and account management systems are obviously involved, as they manage all the information about internal and external users, to provide them with an effective service (messaging, access to applications). In 10 years, Irstea has changed name, integrated the RENATER federation, migrated from ActiveDirectory 2003 to ActiveDirectory 2012, outsourced its messaging system to Partage, set up a Single Sign On system and integrated the SINAPS system. And at the end of the year, Irstea is about to merge with INRA to become INRAE. After a presentation of the Irstea context, we will go over the main stages of this development, with technical feedback for each, ranging from the abandonment of one or other of the solutions, to the adaptation of existing modules or even internal or external development. We will then provide methodological and strategic feedback, grouped into four areas: * interoperability between the various components of the information system; * significant use of the resources proposed by the Higher Education and Research community (Partage for messaging, SINAPS for managing identities, Supann for structuring data); * the use of service provider networks, business and community networks; * the "Cathedral and Bazaar" principle. We will finish by presenting the solution put in place and its developments within the framework of the future INRAE institute.
Author biography
Having joined IRSTEA in 2006 as a support technician, Antoine Gallavardin is now head of IT at the Irstea centre in Lyon Villeurbanne and deputy head of IRSTEA's shared IT resources. After modernising the information system of his centre (170 people), he is contributing to the evolution of the institute's (1,500 people) on the directory, authentication, messaging and web services sides. His achievements include the account provisioning chain fed from the HR information system, the implementation of an SSO, and the creation of an institutional "drive". He worked on identity management with Christophe Monrocq, head of Irstea's management tools, and Guillaume Perréal, lead developer of a development unit. A proponent of free software for its quality and adaptability, he contributes code, bug reports and sometimes strategy. This preference does not exclude other software, and he strives to ensure their interoperability with the least possible impact on the technical and human sides.
157 - Anonymisation of National Education digital identities
Marc Berhaut - Sophie Schaal - Olivier Adam - Bertrand Blaessinger
Abstract
The GDPR came into force on 25 May 2018. It requires organisations to implement “appropriate technical and organisational measures” to be able to demonstrate their compliance. It is therefore more difficult to have representative data from our users on the testing, qualification and development platforms of our infrastructures, without the necessary security measures. However, we must be able to carry out testing and development activities. To meet this challenge, the Rennes Education Authority has implemented a data set production solution that is representative of the educational community and its structures. These data sets are used to qualify the services offered by the Ministry of National Education, the Rennes Education Authority and the local authorities: EduConnect, regional VLE and the GAR. Currently, the Rennes Education Authority is identified by the Ministry of National Education as the service provider for the production of anonymised data sets and is regularly called upon for many projects managed by internal and external, national and local stakeholders. We will introduce you to the solution we have developed - light and based solely on open-source technologies, it implements a process for anonymising actual data from our information systems. We will show how it specifically meets the requirements of volume, consistency and reproducibility. It also offers great adaptability to the IS source type and the format and grammar of the data.
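The abstract does not disclose the tool's internals, but one standard building block for anonymisation with the stated consistency and reproducibility requirements is keyed pseudonymisation: the same real identifier always maps to the same pseudonym across data sets and runs, yet without the secret key the mapping cannot be recomputed or reversed. A hypothetical sketch (the key, identifiers and helper name are invented for illustration):

```python
import hashlib
import hmac

# The key must be kept off the test platforms: anyone holding it could
# re-derive the identifier-to-pseudonym mapping.
SECRET_KEY = b"rotate-me-and-store-me-outside-the-test-platform"

def pseudonymise(identifier, length=12):
    """Deterministic, keyed pseudonym for a personal identifier.

    HMAC-SHA256 makes the output stable for a given key (consistency,
    reproducibility) while remaining one-way for anyone without it.
    """
    mac = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return mac.hexdigest()[:length]

students = ["jean.dupont@ac-rennes.example", "marie.martin@ac-rennes.example"]
pseudos = [pseudonymise(s) for s in students]
print(pseudos)
```

Because the function is deterministic, relationships between records (a student appearing in both the VLE extract and the EduConnect extract, say) survive anonymisation, which is what keeps the resulting data sets usable for cross-system qualification testing.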
Author biography
Marc Berhaut, a software engineer, entered the education sector 9 years ago. In recent years he has worked as an application architect within the Information Systems and Innovation Directorate (DSII) of the Rennes Education Authority. Two years ago he joined the team of IPANEMA, the national experimental project on "Digital identity and user relations". It is in this context that he leads the National Education identity anonymisation project, on which he collaborates with: * Olivier Adam, CISO and Technical Director of the DSII, sponsor of the IPANEMA project and perpetual supplier of the project backlog. In short, he's the boss! * Bertrand Blaessinger, identity administrator and grand master of the application directory of the regional VLE * Sophie Schaal, project coordinator on all fronts, currently being cloned
158 - Millions of users with Shibboleth: how does it work?
Nicolas Romero - Pierre Sagne - Vincent Leblanc
Author biography
Nicolas Romero heads the national Identity and Access competence centre of the Ministry of National Education and Youth. This team provides the expertise on identity and access management, authentication and identity federation for the ministry and the regional education authorities. Nicolas has been working on these topics for around ten years. Pierre Sagne is a member of the same centre; he has been part of the team from day one, as historically its first member. A technical expert, he specialises in particular in identity federation. Vincent Leblanc is also a member of the centre. He previously worked for a long time on load testing, and has been bringing his skills to the team for the past few years.
29 - Data centre: the song of the CoC
Romaric David - Gabrielle Feltin - Paolo Bertoldi
Abstract
In a context of global warming and eco-responsibility, there has been an explosion in the electricity consumption of data centres. The cause: Big Data, AI, cloud computing, the densification of IT infrastructures, etc. Is it irreparable? What can I do in my Data centre? There is a great tool for this, straight from the European Commission: the Code of Conduct on the energy efficiency of data centres (CoC, in short). What on earth does this austere name mean? The Code of Conduct refers to 200 very tangible good practices, which have been developed since 2008 and concern the design and operation of data centres. We would like to familiarise you with the logic and good practices, to give you an initial guide. With this poster, Ecoinfo and the sponsor of the Code of Conduct at the European Commission would like to encourage you to join the club of Code of Conduct participants. Our community is beginning to take an interest. You will benefit from feedback on the approach of the University of Grenoble (the first in France), the University of Bourgogne Franche-Comté, and the University of Strasbourg. You too can become an informed stakeholder and improve the energy efficiency of your Data centres! Ecoinfo can also help you with your Data centre: audit approach, support with the appropriation and implementation of the Code of Conduct. You can find out more by coming to talk with us about the poster! In conclusion, we will provide you with all the keys to the Code of Conduct so that you too can join in.
31 - SUMiT Académies - iTop national service offer
Claude Saive - Anny Lavanant
Abstract
SUMiT (SUite de Management iT) is an ambitious project in its community-based approach of co-construction and national harmonisation of ITSM (Information Technology Service Management) tools within the Ministry of National Education and Youth. Based on iTop from the French publisher COMBODO, the Department of Digital Services in Education (DNE) is committed, alongside the regional education authorities, to guaranteeing the consistency of a national ITSM service offer that brings together all the achievements and expertise. SUMiT Académies is: - A packaged offer covering the incident, request, problem, configuration and catalogue management processes. - The ability to remain in control of one's iTop instances, in terms of configuration, customisation and reporting. - The complete integration of iTop with DigDash, a business intelligence tool. - The creative dynamic of a community approach to developing the offer. - A methodology and tools for change management and solution deployment. In the presentation we will discuss more specifically: - DigDash integrated reporting and drill-down dashboards. - The common CMDB (Configuration Management DataBase) data model and the data collectors. - The generic interface enabling SUMiT Académies to connect to the other ITSM tools on the market. The DNE wants to develop the iTop solution by adding features without touching the core of the product, and by donating its contributions to the community (CMDB collectors, knowledge base).
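The CMDB data collectors mentioned above all follow the same broad synchronisation pattern: compare an authoritative data source with the current CMDB content, then replay only the differences. A minimal, library-free sketch of that reconciliation step (the `serial` reconciliation key and the record shapes are invented for illustration; the real collectors rely on iTop's own data synchronisation mechanisms):

```python
def reconcile(source, cmdb, key="serial"):
    """Diff authoritative source records against a CMDB extract.

    Returns the records to create, update and retire, matched on a
    reconciliation attribute (here a hypothetical 'serial' field).
    """
    src = {r[key]: r for r in source}
    cur = {r[key]: r for r in cmdb}
    to_create = [src[k] for k in src.keys() - cur.keys()]
    to_update = [src[k] for k in src.keys() & cur.keys() if src[k] != cur[k]]
    to_retire = [cur[k] for k in cur.keys() - src.keys()]
    return to_create, to_update, to_retire
```

Keeping the diff logic generic like this is what allows one collector framework to feed the CMDB from many heterogeneous sources (inventory tools, directories, hypervisors) without per-source special cases.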
Author biography
Anny Lavanant works at the Ministry of National Education and Youth, within the Department of Digital Services in Education and, more specifically, the National Service Centre. Her career has taken her through regional education authorities (information system project manager, deputy CIO) as well as national teams developing and distributing the examination and recruitment applications, and the technical monitoring centre. Her in-depth knowledge of the regional and national contexts, and of the ministry's information system, led her to serve as national project manager for IT service governance and operations. Since September 2017, she has led the SUMiT (SUite de Management iT) project, an ambitious project in its community-based approach of co-construction and nationwide harmonisation of ITSM (Information Technology Service Management) tools within the ministry. It is in this capacity that she will present the SUMiT offer proposed to the regional education authorities by the DNE.
38 - UX Design and Design Thinking in the French National Education system - User-centred design: yes, but how?
Jerome Le Tanou - Marine Gout
Abstract
This presentation offers feedback on the implementation of a process to improve the user experience (UX) of working tools within the National Education system. We will report on our experience and on what we learned from applying this approach to the digital tools of a multitude of employees with very varied profiles (1.2 million employees across over 100 different roles). The decision to take an interest in the user experience is a founding hypothesis of our initiative. It is particularly distinctive because of our central position in the organisation, which in practice distances us from the users we seek to help. To achieve this, we implemented a Design Thinking approach that made it possible to identify, with users, the difficulties to be resolved in the context of overhauling the messaging and collaboration environments, before offering them prototypes of suitable solutions. As part of this project, we relied on the Design Thinking approach developed at Stanford University: * Empathize * Define * Ideate * Prototype * Test Alongside this approach, we have undertaken to build an open innovation platform that can co-produce UX improvements for free and open-source products. After a year of applying the Design Thinking and UX Design approach, in which staff were fully involved in a spirit of co-construction on two projects with distinct and complementary approaches, we now wish to share our feedback on this innovative effort.
Author biography
====== Marine GOUT ====== I am a project manager for the digital directorate of the French National Education system. After studies in computer science, multimedia and then cognitive science in the 2000s, I taught in schools for 5 years. I then undertook a PhD in humanities and social sciences, which I defended in 2015 at Toulouse 3 Paul-Sabatier University. During my doctoral years, I had the opportunity to get involved in the institutional life of my university and to discover the world of public digital policy. This led me to join the DINSIC after my thesis, where I worked for two years on the quality of online public services. Before returning to National Education, I studied the privacy-protection practices of smartphone users, and the usability of privacy-protection tools, as part of a post-doctorate at Rennes 2 University. All these pieces put end to end led me to take a very close interest in design, which I see as a new assembly of familiar knowledge and practices. ====== Jérôme LE TANOU ====== Passionate about computing since 1981, the year I quite improbably received a ZX81, this passion has never left me and steered my initial training towards an engineering degree in computer science, networking option. When I entered the workforce, just after graduating in 1997, Ethernet and IP were exploding. I thus had the chance to contribute to the deployment of substantial infrastructures (serving several tens of thousands of users) with emerging technologies that are today undisputed standards. I worked for a long time in infrastructure (networks, telecoms, systems, data centres, etc.), whether in higher education (University of Grenoble), research (CNRS) or National Education (Grenoble Education Authority).
My work has always been guided by a single objective: making the service as satisfying and as accessible as possible for the user. It has always been a pleasant intellectual game to wrangle sometimes complex technologies in order to deliver a service to the user in the most transparent and simple way possible. It is in this same spirit that, in 2018, I joined the Digital Directorate (DNE) at the Ministry of National Education as a project manager, to help define the Agent Digital Work Environment (ETNA) for all 1.2 million National Education staff, which led to the approach that Marine and I are presenting to you.
114 - The environmental problems of innovation and sharing
Marc Chantreux
Abstract
Firstly, I will ask the questions that I think identify the various aspects of the problem: * What are the ecological impacts of the production of IT hardware and software? * What are the trends and orders of magnitude? * What culture have we developed around this production? * How is this culture intimately linked to the problems we face and to the solutions that will eventually have to be accepted? * What resources should ideally be mobilised to deal with the emergence and adoption of these solutions? Secondly, I will survey the many stakeholders and the messages conveyed around the ecology of digital technology. I will try to identify a common, factual and usable core on which to build a strategy. I will then look at the strategy itself, developing several areas of focus: * our immediate responsibility as a producer and provider of services and also of architectures * my conviction that the higher education and research community, if politicians give us the means, is more capable than any other of designing, developing and promoting the innovations needed for a "digital transition" that respects the specifications imposed on us by ecological realities. Finally, I will explain the need to make our managers aware of the cultural changes that will first have to take place within our own departments.
Author biography
A free-software advocate since circa 1996 (because I believe collaboration will always be more productive and virtuous than competition), a member of the university community since 2002 (out of a desire to contribute to research), a vegetarian and anti-speciesist since circa 2007 (because I believe that any being capable of the same emotions and sensations as me is entitled to the same consideration), I keep away from the beaten track. Co-author of JRES 2005 and 2017 articles, co-organiser of and speaker at conferences (The Perl Conference Europe, FOSDEM, PyCon.fr, OSDC.fr, the French Perl Workshops, ...) and hackathons, occasional contributor to many free software projects, founder of the Casablanca LUG in 1997 (the first in Morocco?). Engineer at the digital directorate of Unistra since 2012, FLOSS evangelist at Renater since 2018.
155 - Establishment of a new social and collaborative working environment: users at the centre of attention
Simon Piquard - Stéphane Salles
Abstract
All higher education and research institutions now have a VLE. The aim of the University of Strasbourg (Unistra) was to integrate a social dimension into this tool. In line with a ministerial desire to promote support for student success, and in a current climate of innovation and digital revolution calling for a digital workplace, Unistra conducted consultations and tool studies with all of its users, being very keen to put them back at the heart of its digital services offering. These studies led to the choice of a tool: Ernest. The implementation of this new digital and social work environment, in which information is profiled and this customisation is refined through various mechanisms, provides a framework conducive to success. Indeed, by becoming an active participant in the environment and no longer just a consumer, learners take ownership of the tools available to them and develop their network, both major factors in university success. The same challenges exist for staff, teachers and researchers, who have asked for tools to streamline collaborative work and communication. Strong support for Ernest is a key success factor in its implementation. We have therefore decided to focus on change management, based on three principles: Involve - Communicate - Train. From gathering needs with the users through to implementation, we will show how we decided to put the user at the centre of our approach throughout the project.
Author biography
===== Simon Piquard ===== "If I had been born 2,000 years ago, I would have been born in Rome." He says it and everything becomes clear: we immediately understand that Simon hails from Nancy, in Meurthe-et-Moselle, city of light, centre of the modern world. On the studies side, it was no land of plenty, as the saying goes. To general surprise he obtained a baccalauréat in economics and social sciences, then, in 2005, despite the ambient scepticism, a BTS in sales and marketing under a work-study contract. Successively a profit-centre manager, head of a catering service and sales representative, he then drifted through jobs that hardly interested him, moved house 3 times in 3 years, and finally settled in the charming and sparkling Alsatian capital, Strasbourg. Shortly after his arrival he joined the newly created University of Strasbourg (2009) as coordinator for the rollout of the new student and staff card "Mon Pass Campus Alsace". He was happy, so were the bosses, he stayed. Ten years on, he has accumulated a few battle scars and, above all, a great deal of experience as a business-side project manager (Pass Campus Alsace, self-service MFP copiers, the Alisée student-records system, the Unistra Seafile box/cloud, Partage Unistra, physical access control in buildings), communications correspondent, trainer and head of change management (Windows 10, the Ernest digital work environment). He comes before you today, humbly, to talk about change management, usage and user-centred approaches on the recent Ernest project ("Establishment of a new social and collaborative working environment: a user-centred approach"), Partage ("Migrating the University of Strasbourg's email to a shared suite of collaborative tools") and storage tools (poster: "Which storage solution to choose for the coming decade?"). Ave!
160 - A ticket management system linked to a CMDB
Matthieu Fuchs - Alexandre Combeau
Abstract
Linking a ticket system to a CMDB, a winning combination? A CMDB (configuration management database) is a tool used to display the various configuration elements of the Information System, as well as the links between them. To replace our ticket management system, launched in 2009, the Unistra Digital Department wanted to combine three ITIL processes, CMDB (iTop), and ticket management and link them through the service catalogue. Originally, there were few links between the ticket system and our CMDB. Our flow of 25,000 annual tickets and our numerous configuration elements were also completely separate. The closer coupling between these two systems must enable better ticket analysis using iTop, with better control of IS elements thanks to tickets. The significant involvement of the project owner meant we could get as close as possible to expectations in terms of ticket management. In addition, the implementation of a user portal means that the entire university community (around 90,000 users) can create more varied requests, tailored to resolution needs, enabling us to improve our quality of service. During the presentation, we will present the work done to adapt iTop via community modules and by creating specific modules. We will also discuss feedback from the teams in charge of processing tickets as well as users via the portal.
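The value of coupling tickets to a CMDB comes from being able to walk from a ticket, through the service catalogue entry it references, down to the configuration items behind that service. A toy illustration of the traversal and of the kind of analysis it enables (the data model below is invented for the example; iTop's real schema is considerably richer):

```python
# Hypothetical, simplified model: catalogue services point to CIs, and
# tickets reference a catalogue service rather than raw infrastructure.
catalogue = {
    "messaging": ["srv-smtp-01", "srv-imap-01"],
    "vle": ["srv-web-03", "db-moodle"],
}

tickets = [
    {"id": 1, "service": "messaging", "status": "open"},
    {"id": 2, "service": "vle", "status": "open"},
    {"id": 3, "service": "messaging", "status": "closed"},
]

def impacted_cis(service):
    """Configuration items behind a catalogue service."""
    return catalogue.get(service, [])

def open_tickets_per_ci():
    """Count open tickets per CI - the sort of ticket analysis the
    tighter iTop/ticketing coupling is meant to enable."""
    counts = {}
    for ticket in tickets:
        if ticket["status"] != "open":
            continue
        for ci in impacted_cis(ticket["service"]):
            counts[ci] = counts.get(ci, 0) + 1
    return counts
```

Without the catalogue link, the same question ("which infrastructure element generates the most incidents?") requires manually re-reading ticket descriptions; with it, the answer is a query.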
Author biography
Matthieu Fuchs, 24, obtained a master's degree in software and knowledge engineering in September 2019, including 3 years on a work-study contract on the project to set up a CMDB (iTop) at the DNum. He is a developer in the Development, Integration and Configuration department within the digital directorate (DNum) of Unistra.
165 - CMDB: Advantages and Methodology
Cédric Lelu - François Alberici - Delphine Michaut
Abstract
In 2014, the IT department of the Besançon Education Authority supported its reorganisation with the iTop service desk. This tool professionalises ISD operation by modelling all internal business processes. Here we discuss its cornerstone, the configuration management database (CMDB): both the way we implemented it and all its use cases within our IS. The design of the CMDB led to standardisation choices and to the formalisation of the ISD's organisation. Roles were clarified and the CIs, the components of the information system, were defined: the technical building blocks of an IT infrastructure, applications, users, etc. With our standard model we can connect the user's application-level view with the system view, thereby giving everyone in the ISD an overall vision of our IS. The CMDB is fed from the IS's reference data sources. It is used in all ISD business processes via a local application. In this article, we show how it was interfaced with the system engineer's usual tools and how, through simple detection of new CIs, information was propagated to the interfaced tools for action. The exhaustive mapping provided by the CMDB, kept up to date in real time, enables impact forecasts and precise on-the-spot diagnostics (interventions, maintenance, etc.). The CMDB is a powerful tool for making production more reliable, and its use is a key practice for the infrastructure manager.
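The impact forecasts mentioned above boil down to a graph traversal: starting from one CI, follow the dependency links recorded in the CMDB to find everything that transitively depends on it. A minimal breadth-first sketch (the graph and CI names are invented for the example; a real CMDB distinguishes many link types):

```python
from collections import deque

# Hypothetical dependency graph: each edge points from a CI to the
# items that depend on it directly.
depends_on_me = {
    "san-array-1": ["esx-host-2"],
    "esx-host-2": ["vm-db-01", "vm-web-01"],
    "vm-db-01": ["app-grades"],
    "vm-web-01": ["app-grades", "app-intranet"],
}

def impact(ci):
    """Breadth-first walk returning everything that transitively
    depends on `ci` - i.e. what an outage or intervention would hit."""
    seen, queue = set(), deque([ci])
    while queue:
        current = queue.popleft()
        for dependent in depends_on_me.get(current, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen
```

Run against a complete, real-time CMDB, this is what turns "can we take this storage array down tonight?" from guesswork into a listing of the affected applications.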
Author biography
François Alberici has been a research engineer at the IT department of the Besançon Education Authority since 2000. As deputy CIO, he is responsible for the Production and Operations division, and in particular for the systems and network teams; his work therefore focuses on organisational aspects. As deputy CISO, he also pays particular attention to issues of information system security and quality. Cédric Lelu, who obtained a PhD in engineering sciences in 2002, joined the Systems and Networks team of the IT service of the Besançon Education Authority in 2003 as a research engineer. There he took charge of monitoring, of putting the national applications into production (DB2 / WebLogic / Apache), and of the VMware systems and SAN storage, and developed a first CMDB. In 2014, during the reorganisation of the IT department, he continued working in his team, renamed Production and Operations, Infrastructures, and took part in establishing the various quality procedures.