Dr. Tim Dörnemann

Marburg, Hesse, Germany
611 followers · 500+ connections

About

I’ve worked in several roles so far: research and teaching assistant during my PhD in…

Experience

  • INFOMOTION GmbH

    Frankfurt, Hesse, Germany

  • Marburg-Biedenkopf, Hesse, Germany

  • Allendorf/Eder

  • Marburg

  • Philipps University of Marburg

  • St. Gallen

Education

  • Philipps-Universität Marburg

    Thesis: Supporting Quality of Service in Scientific Workflows

    Book version:
    http://www.amazon.de/gp/aw/d/3838132343/ref=redir_mdp_mobile

  • Focus areas:
    Computer science: distributed systems, database systems
    Business administration: marketing

Licenses & Certifications

Volunteering

  • Web Administrator & IT Support

    Marburg Waldkindergarten

    – Present · 11 yrs 8 mos

    Web page administration
    Support of employees with PC problems

Publications

  • Supporting Quality-of-Service in Scientific Workflows

    Dissertation, University of Marburg, Germany

    In this thesis, the use of the industry standard BPEL, a workflow language for modeling business processes, is proposed for the modeling and the execution of scientific workflows. This work presents components that extend an existing implementation of the BPEL standard and eliminate the identified weaknesses. The particular focus is on so-called non-functional (Quality-of-Service) requirements. These requirements include scalability, reliability (fault tolerance), data security, and cost (of executing a workflow). The major components cover exactly these requirements:
    (1) Scalability of the workflow system is achieved by automatically adding additional (Cloud) resources to the workflow system’s resource pool when the workflow system is heavily loaded.
    (2) High reliability is achieved via continuous monitoring of workflow execution and corrective interventions, such as re-execution of a failed workflow step or replacement of the faulty resource.
    (3) The majority of scientific workflow systems only take the performance and utilization of resources for the execution of workflow steps into account when making scheduling decisions. The presented workflow system goes beyond that. By defining preference values for the weighting of costs and the anticipated workflow execution time, workflow users may influence the resource selection process. The developed multi-objective scheduling algorithm respects the defined weighting and makes both efficient and advantageous decisions using a heuristic approach.
    (4) Because it supports various encryption, signature and authentication mechanisms (e.g., Grid Security Infrastructure), the workflow system guarantees data security in the transfer of workflow data. Furthermore, this dissertation presents two modeling tools that support users with different needs.

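The preference-weighted resource selection in (3) can be illustrated with a small sketch: cost and expected execution time are normalized, combined according to the user's weighting, and the best-scoring resource wins. This is a toy illustration only, not the dissertation's actual heuristic; all resource names and numbers are hypothetical.

```python
# Toy preference-weighted resource selection: the user weights cost
# against expected execution time, and the scheduler picks the resource
# with the lowest combined score. Names and numbers are hypothetical.

def normalize(values):
    """Scale values to [0, 1] so cost and time become comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def select_resource(resources, cost_weight=0.5):
    """resources: list of (name, cost, expected_time_s) tuples."""
    time_weight = 1.0 - cost_weight
    costs = normalize([r[1] for r in resources])
    times = normalize([r[2] for r in resources])
    scores = [cost_weight * c + time_weight * t for c, t in zip(costs, times)]
    best = min(range(len(resources)), key=scores.__getitem__)
    return resources[best][0]

resources = [
    ("local-cluster", 0.0, 120.0),  # free but slow
    ("ec2-small",     0.8, 60.0),
    ("ec2-large",     2.4, 25.0),
]
print(select_resource(resources, cost_weight=0.9))  # cost-sensitive user
print(select_resource(resources, cost_weight=0.1))  # time-sensitive user
```

With a high cost weight the free local cluster wins; shifting the weight toward time selects the fastest (and most expensive) Cloud instance.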
  • Multi-Objective Scheduling of BPEL Workflows in Geographically Distributed Clouds

    IEEE

    In this paper, a novel scheduling algorithm for Cloud-based workflow applications is presented. If the constituent workflow tasks are geographically distributed - hosted by different Cloud providers or data centers of the same provider - data transmission can be the main bottleneck. The algorithm therefore takes data dependencies between workflow steps into account and assigns them to Cloud resources based on the two conflicting objectives of cost and execution time according to the preferences of the user. Our implementation is based on BPEL, an industry standard for workflow modeling, and does not require any changes to the standard. It is based on, but not limited to, the ActiveBPEL engine and Amazon's Elastic Compute Cloud. To automatically adapt the scheduling decisions to network-related changes, the data transmission speed between the available resources is monitored continuously. Experimental results for a real-life workflow from a medical domain indicate that both the workflow execution times and the corresponding costs can be reduced significantly.

    Other authors:
    • Ernst Juhnke
    • David Böck
    • Bernd Freisleben
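The core idea, penalizing a placement by the monitored transfer time of a step's input data, can be sketched roughly as follows. The sites, bandwidths, and data sizes are invented for illustration, and the paper's actual algorithm also weighs cost against time per the user's preferences.

```python
# Sketch of data-transmission-aware placement: the estimated finish
# time of a step is its compute time at a site plus the time to move
# its input data there, using monitored link bandwidths.
# All sites, speeds, and sizes are hypothetical.

# Monitored transfer speed between sites, in MB/s (local access is free).
bandwidth_mb_s = {
    ("eu-west", "us-east"): 10.0,
    ("us-east", "eu-west"): 10.0,
}

def transfer_s(data_mb, src, dst):
    """Seconds needed to move data_mb from src to dst."""
    if src == dst:
        return 0.0
    return data_mb / bandwidth_mb_s[(src, dst)]

def place_step(compute_s, data_mb, data_site):
    """compute_s: dict site -> compute time of the step on that site."""
    return min(compute_s,
               key=lambda site: compute_s[site] + transfer_s(data_mb, data_site, site))

compute_s = {"eu-west": 80.0, "us-east": 40.0}  # us-east is twice as fast
print(place_step(compute_s, 6000.0, "eu-west"))  # big input: stay near the data
print(place_step(compute_s, 100.0, "eu-west"))   # small input: the faster site wins
```

Moving 6000 MB at 10 MB/s would add 600 s, so the step stays where its data is; with a small input, the faster remote site becomes the better choice.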
  • Data Flow Driven Scheduling of BPEL Workflows Using Cloud Resources

    IEEE

    In this paper, an approach to assign BPEL workflow steps to available resources is presented. The approach takes data dependencies between workflow steps and the utilization of resources at runtime into account. The developed scheduling algorithm simulates whether the makespan of workflows could be reduced by providing additional resources from a Cloud infrastructure. If yes, Cloud resources are automatically set up and used to increase throughput. The proposed approach does not require any changes to the BPEL standard. An implementation based on the ActiveBPEL engine and Amazon's Elastic Compute Cloud is presented. Experimental results for a real-life workflow from a medical application indicate that workflow execution times can be reduced significantly.

    Other authors:
    • Ernst Juhnke
    • Thomas Noll
    • Dominik Seiler
    • Bernd Freisleben
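The "simulate, then provision" decision described above can be illustrated with a toy greedy list scheduler: estimate the makespan on the current hosts, then with one extra Cloud host, and provision only if the makespan shrinks even after paying the startup overhead. Durations, host counts, and the overhead value are hypothetical; the paper's simulation is more detailed.

```python
# Toy "simulate, then provision" decision for a batch of independent
# workflow steps. All durations and the startup overhead are hypothetical.
import heapq

def makespan(durations, n_hosts):
    """Greedy list scheduling: give each step to the earliest-free host."""
    hosts = [0.0] * n_hosts
    heapq.heapify(hosts)
    for d in sorted(durations, reverse=True):
        heapq.heappush(hosts, heapq.heappop(hosts) + d)
    return max(hosts)

def should_provision(durations, n_hosts, startup_overhead_s=90.0):
    """Provision a Cloud host only if it shortens the simulated makespan."""
    current = makespan(durations, n_hosts)
    with_cloud = makespan(durations, n_hosts + 1) + startup_overhead_s
    return with_cloud < current

steps = [300.0, 280.0, 260.0, 240.0, 220.0, 200.0]
print(should_provision(steps, n_hosts=2))  # True: a third host pays off
```

For a lightly loaded batch the same check returns False, since the startup overhead would outweigh the saved time.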
  • DAVO: A Domain-Adaptable, Visual BPEL4WS Orchestrator

    IEEE

    The Business Process Execution Language for Web Services (BPEL4WS) is the de facto standard for the composition of Web services into complex, value-added workflows in both industry and academia. Since the composition of Web services into a workflow is challenging and error-prone, several graphical BPEL4WS workflow editors have been developed. These tools focus on the composition process and the visualization of workflows and mainly address the needs of Web service experts. To increase the acceptance of BPEL4WS in new application domains, it is mandatory that non-Web-service experts are also empowered to easily compose Web services into a workflow. This paper presents the domain-adaptable visual orchestrator (DAVO), a graphical BPEL4WS workflow editor which offers a domain-adaptable data model and user interface. DAVO can be easily customized to domain needs and is thus suitable for non-Web-service experts.

    Other authors:
    • Markus Mathes
    • Ernst Juhnke
    • Roland Schwarzkopf
    • Bernd Freisleben
  • On-Demand Resource Provisioning for BPEL Workflows Using Amazon's Elastic Compute Cloud

    IEEE

    BPEL is the de facto standard for business process modeling in today's enterprises and is a promising candidate for the integration of business and Grid applications. Current BPEL implementations do not provide mechanisms to schedule service calls with respect to the load of the target hosts. In this paper, a solution that automatically schedules workflow steps to underutilized hosts and provides new hosts using Cloud computing infrastructures in peak-load situations is presented. The proposed approach does not require any changes to the BPEL standard. An implementation based on the ActiveBPEL engine and Amazon's Elastic Compute Cloud is presented.

    Other authors:
    • Ernst Juhnke
    • Bernd Freisleben
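The peak-load behavior described above, routing service calls by host load and adding a Cloud host only when every existing host is saturated, can be sketched as a few lines. The threshold, host names, and load values are hypothetical; the actual system works with BPEL service calls on ActiveBPEL and EC2.

```python
# Toy load-aware dispatch with peak-load provisioning: calls go to the
# least-loaded host; if every host is above the load threshold, a new
# (Cloud) host is added first. All names and numbers are hypothetical.

def dispatch(hosts, load_threshold=0.8):
    """hosts: dict name -> current load in [0, 1]. Returns the target host."""
    if all(load > load_threshold for load in hosts.values()):
        # Peak load: provision a fresh Cloud instance (stubbed here).
        new_host = f"cloud-{len(hosts)}"
        hosts[new_host] = 0.0
    return min(hosts, key=hosts.get)

print(dispatch({"node-a": 0.9, "node-b": 0.95}))  # saturated: new host is used
print(dispatch({"node-a": 0.3, "node-b": 0.95}))  # otherwise: least-loaded host
```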
  • Omnivore: Integration of Grid Meta-Scheduling and Peer-to-Peer Technologies

    IEEE

    Dedicated servers remain a common constituent of Grid job scheduling architectures, forcing site administrators to make compromises between administrative expenses and system reliability. Apart from requiring administrative attention, dedicated servers create single points of failure and should not be subjected to network churn. This paper presents the design and implementation of Omnivore, a fully decentralized job scheduling system, built on a peer-to-peer based meta-scheduler. Omnivore is able to cope both with node failures and network churn, eliminating the need for central administration and continuous resource availability. It is integrated into the Grid landscape (especially the Globus Toolkit 4) by means of the GridWay meta-scheduler to provide scalable distributed scheduling, replicated storage and system monitoring capabilities. Results obtained from an experimental evaluation of our implementation show that Omnivore is both scalable and resilient in the presence of node failures and network churn.

    Other authors:
    • Michael Heidt
    • Kay Dörnemann
    • Bernd Freisleben
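The decentralization idea, assigning responsibility for a job to whichever live node is "closest" to it in a hash space so that no central scheduler is needed and responsibility moves automatically when nodes fail, can be sketched generically. This is a plain DHT-style illustration, not Omnivore's actual peer-to-peer design; node and job names are invented.

```python
# Generic DHT-style sketch of decentralized job-to-node assignment:
# each job is owned by the live node whose hashed ID is numerically
# closest to the job's hash. When a node fails, ownership shifts to
# another node without any central coordinator. Names are hypothetical.
import hashlib

def h(name):
    """Map a name onto a large integer hash ring position."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

def responsible_node(job, live_nodes):
    """The live node with the smallest hash distance to the job."""
    return min(live_nodes, key=lambda n: abs(h(n) - h(job)))

nodes = ["node-a", "node-b", "node-c"]
owner = responsible_node("job-42", nodes)
# If the owner fails, responsibility moves automatically.
survivors = [n for n in nodes if n != owner]
print(responsible_node("job-42", survivors) != owner)  # True
```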

Courses

  • ITIL Foundations

    -

Honors & Awards

  • IEEE Highly Commended Paper Award

    IEEE

    IEEE Highly Commended Paper Award at the IEEE 23rd International Conference on Advanced Information Networking and Applications (AINA-09) in Bradford, UK

    Paper Title: DAVO: A Domain-Adaptable, Visual BPEL4WS Orchestrator

  • IEEE Best Paper Award

    IEEE

    Best Paper Award at the 8th IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2008) in Lyon, France

    Paper Title: Omnivore: Integration of Grid Meta-Scheduling and Peer-to-Peer Technologies

Languages

  • English

    Fluent

  • German

    Native or bilingual

  • French

    Elementary
