I recently read an impressive editorial that laid bare possible answers to “why do we (as software engineering researchers) usually publish scientific papers?”. Jeff Offutt listed four answers: 1. yet another bullet in a C.V.; 2. satisfying a local measure for getting a promotion; 3. having an impact on the other researchers in our community by producing implicit knowledge; 4. effectively solving relevant challenges that practitioners in our community are facing at the moment! Finally, he mentioned that in order to be truly influential, we have to write papers that add value to the field, are clearly presented, and are published in the right venues that our community cares about. As a researcher in the software engineering field, I would like to add another possible answer to this question: sometimes, in the middle of our research, we need to hear back from our community whether we are on the right track or not. We might need their feedback on the spot, when we are revising or presenting or conducting experiments, or later, when they email us to ask something about the work, want to hear about recent developments in the approach targeted in that paper, or hopefully cite it in their own research. At this stage, not all the solution components of our research problem are fully developed; they are somewhere in between, progressing toward the objectives that have been set. At this stage, these kinds of manuscripts don’t have any impact on practice at all, and the other motivations really shouldn’t apply in fairly developed contexts. However, they are not totally out of the blue in all environments and communities! By the way, we are all human beings and need some motivation to cheer our research work up a bit 🙂
The problem domain of service identification, which is intrinsically part of the larger domain of service-oriented modeling, was put forward by pioneers of service-oriented and cloud computing, namely Ali Arsanjani of IBM and Thomas Erl, back in 2004.
Since then, several researchers have attempted to develop appropriate solutions for this problem area. According to a systematic review by Qing Gu and Patricia Lago (2010), researchers and practitioners in the service-oriented computing community have proposed a large number of service identification methods, whose key mission is to determine the scope of functionality a service exposes to meet business needs, and the boundaries between services, so as to optimize design measures. We proposed our first humble idea at the International Conference on Services Computing (SCC) in 2008. Due to the heterogeneity of these methods, practitioners often face difficulty choosing the right service identification approach that copes with available resources and fits their needs. These methods rely heavily on the experience of architects, directing them in the identification of services and architectural elements through descriptive, qualitative guidelines. Such methods have proved laborious and unproductive, given the sheer scale of enterprises and the human limitations in comprehending the mostly non-quantitative, textual service requirements of such large enterprises while deriving proper services from business models. Based on these challenges and the opportunity to improve efficiency, we proposed a novel approach called ASIM (Automated Service Identification Method) for automatically identifying, and partly specifying, enterprise-level software services from business models using the best practices and principles of model-driven software development. On these premises, the service identification problem, for which we proposed an automated method, can be formulated as follows: “How can good service abstractions (at the right level of granularity) with acceptable technical metrics be derived automatically from high-level business requirements and process models?”
We have formulated service identification as a multi-objective optimization problem and solved it with a novel meta-heuristic optimization algorithm that derives appropriate service abstractions using quantitative measures for granularity, coupling, cohesion, reusability, and maintainability. ASIM helps architects derive the right architectural elements of service-oriented solutions, which in turn lead to effective service models.
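To make this concrete, here is a toy sketch in Python, with made-up activity names and dependency weights, of how service identification can be cast as an optimization problem. This is not ASIM’s actual algorithm or metric suite: candidate services are groupings of business activities, scored here by simple cohesion and coupling sums, and a plain hill climber stands in for the meta-heuristic.

```python
import random

# Hypothetical business-process activities and dependency weights
# (illustrative data; ASIM derives its inputs from business models).
ACTIVITIES = ["checkOrder", "reserveStock", "billCustomer",
              "shipOrder", "notifyCustomer", "updateLedger"]
DEPS = {
    ("checkOrder", "reserveStock"): 5, ("billCustomer", "updateLedger"): 4,
    ("shipOrder", "notifyCustomer"): 3, ("reserveStock", "shipOrder"): 1,
    ("checkOrder", "billCustomer"): 1,
}

def weight(a, b):
    return DEPS.get((a, b)) or DEPS.get((b, a)) or 0

def cohesion(service):
    """Sum of dependency weights inside one candidate service."""
    return sum(weight(a, b) for i, a in enumerate(service)
               for b in service[i + 1:])

def coupling(partition):
    """Sum of dependency weights crossing service boundaries."""
    total = 0
    for i, s in enumerate(partition):
        for t in partition[i + 1:]:
            total += sum(weight(a, b) for a in s for b in t)
    return total

def fitness(partition):
    # Scalarized multi-objective: reward cohesion, penalize coupling.
    return sum(cohesion(s) for s in partition) - 2 * coupling(partition)

def hill_climb(activities, n_services, steps=2000, seed=0):
    """Randomly reassign one activity at a time, keeping non-worsening moves."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_services) for _ in activities]

    def to_partition(groups):
        return [[a for a, g in zip(activities, groups) if g == k]
                for k in range(n_services)]

    best = fitness(to_partition(assign))
    for _ in range(steps):
        i = rng.randrange(len(activities))
        old = assign[i]
        assign[i] = rng.randrange(n_services)
        score = fitness(to_partition(assign))
        if score >= best:
            best = score
        else:
            assign[i] = old  # revert a worsening move
    return to_partition(assign), best

services, score = hill_climb(ACTIVITIES, n_services=3)
```

A real meta-heuristic would keep a Pareto front over the separate objectives rather than scalarizing them, but the shape of the search, moving activities between candidate services and scoring each partition, is the same.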
According to Smith (and an untangled interpretation by Charlotte Herzeel), computation can be modeled as a mapping between three distinct domains: a “syntactic” domain, an “internal representational” domain, and the “real world”. The syntactic domain consists of descriptions of a system, while the internal domain consists of all the entities that realize the language constructs by which we describe the system. The real world consists of the natural objects and abstract concepts that the stakeholders of the system want to refer to, but the interpretation of these concepts is obviously out of reach of a computer. Mapping a description to its internal representation is called “internalization” (cf. “parsing”). The mapping from one element of the internal representation to another is called “normalization”; in normalization, an element is reduced to its simplest form. The mapping from an internal representation to its meaning in the world is called “denotation”.
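A minimal Python illustration of the first two mappings (denotation, of course, cannot be mechanized): parsing internalizes the description “2 * (3 + 4)” into an AST, and a constant-folding pass normalizes that internal element to its simplest form.

```python
import ast

# "Internalization": map a syntactic description (source text) to an
# internal representation (an abstract syntax tree).
expr = ast.parse("2 * (3 + 4)", mode="eval")

# "Normalization": map one internal element to a simpler internal
# element -- here, by folding constant subexpressions.
def normalize(node):
    if isinstance(node, ast.Expression):
        return ast.Expression(body=normalize(node.body))
    if isinstance(node, ast.BinOp):
        left, right = normalize(node.left), normalize(node.right)
        if isinstance(left, ast.Constant) and isinstance(right, ast.Constant):
            ops = {ast.Add: lambda a, b: a + b, ast.Mult: lambda a, b: a * b}
            return ast.Constant(ops[type(node.op)](left.value, right.value))
        return ast.BinOp(left=left, op=node.op, right=right)
    return node

folded = normalize(expr)
# The fully normalized form is the single constant 14.
```

The number 14 is still an internal entity; what it *means* to a stakeholder (a price, a count) lives in the real world, which is exactly the denotation mapping the computer cannot perform.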
Let’s get back to reflection. Generally, reflection is about inspecting (observing and therefore reasoning about) and changing the internal representation of a system (structural reflection), as well as reasoning about and acting on its normalization (behavioral reflection). To be more specific, inspecting the internal representation is typically called introspection, changing the internal representation or the normalization is known as intercession, and the mechanism that enables these manipulations, by exposing internal entities as first-class values, is called reification.
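A tiny Python sketch (my own illustration, not taken from the reflection literature) makes the distinction concrete: introspection observes a running object’s internals, while intercession changes them at run time, with the method reified as an ordinary value.

```python
class Service:
    """A hypothetical object whose behavior we will reflect on."""
    def __init__(self):
        self.calls = 0

    def handle(self, request):
        self.calls += 1
        return f"handled {request}"

svc = Service()

# Introspection: observing the internal representation of a live object.
assert "handle" in dir(svc)
assert callable(getattr(svc, "handle"))

# Reification: the bound method becomes a first-class value we can hold.
original = svc.handle

# Intercession: changing the internal representation at run time by
# replacing the method, altering the system's behavior from within.
def logged_handle(request):
    print("intercepted:", request)
    return original(request)

svc.handle = logged_handle

svc.handle("ping")  # now logs before delegating to the original behavior
```

Reflective middleware does essentially this, just with a disciplined metaobject protocol instead of ad hoc attribute assignment.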
The concept of computational reflection originated with Brian Cantwell Smith and was elaborated in programming languages by Pattie Maes (in “Concepts and Experiments in Computational Reflection“) and by Gregor Kiczales and his colleagues (in “The Art of the Metaobject Protocol“). Afterward, with the emergence of new paradigms and the need for distributed, transparent systems embracing component-based concepts, computational reflection found its way all the way down to distributed systems and their underlying middleware infrastructures to make them highly adaptive. As a result of these efforts, a number of reflective component model specifications (e.g., Fractal) and middleware implementations (e.g., Julia) emerged. Finally, with the rising need for dynamic software evolution in self-* systems came the need to explicitly maintain an architectural description that is causally connected to the run-time model. These concepts have been extensively explored by Walter Cazzola, Peyman Oreizy, Nenad Medvidovic, Richard Taylor, Jeff Magee, Jeff Kramer, and their colleagues under the theme of “Architecture-Based Runtime Software Evolution”.
I found this paper (by Jeannette Wing, CMU) to be a good introduction for anyone interested in quickly comprehending what is, and what is not, “computational thinking“. As software engineers, we have all experienced people thinking of us as a kind of geek whose only skill is computer programming, and there are still many who think the fundamental research in our discipline is done and only the engineering remains. But this is so untrue. I recommend they read this masterpiece and its corresponding presentation to realize that thinking like a computer scientist means more than being able to program a computer: it requires thinking at multiple levels of abstraction! By the way, I extracted this interesting quote from the paper to convince you to read it: “Computational thinking is using abstraction and decomposition when attacking a large complex task or designing a large complex system. It is separation of concerns. It is choosing an appropriate representation for a problem or modeling the relevant aspects of a problem to make it tractable. It is using invariants to describe a system’s behavior succinctly and declaratively. It is having the confidence we can safely use, modify, and influence a large complex system without understanding its every detail.”
of enterprise software systems, SO software applications are often designed in an ad hoc manner, with little consideration given to the underlying design structures, thereby potentially resulting in decreased maintainability of the produced software. Early prediction of design principles is desirable given that software maintenance has long been regarded as one of the most resource-consuming development phases.
BPEL processes are workflow-oriented composite services for SO solutions. A rapidly changing environment and turbulent market conditions require flexible BPEL processes that can adapt to several modifications during their life cycles. Such adaptability and flexibility require a low degree of dependency, or coupling, between a BPEL process and its surrounding environment. In fact, heavy coupling and context dependency with partners provoke several undesirable drawbacks such as poor understandability, inflexibility, inadaptability, and defects. This paper proposes design-phase metrics for measuring BPEL process context independency. With the aid of these metrics, the architect can analyze and control the context independency of a BPEL process quantitatively. To validate the metrics, the authors collected a data set consisting of 70 BPEL processes and also gathered experts’ ratings of context independency through a controlled experiment. The results, analyzed with IBM SPSS, reveal a high statistical correlation between the proposed metrics and the experts’ judgment of context independency.
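As a rough illustration of what such a design-phase measurement can look like (this is a toy ratio invented for this post, not one of the paper’s metrics), one can parse a BPEL process and relate partner-facing activities to all basic activities:

```python
import xml.etree.ElementTree as ET

BPEL_NS = "{http://docs.oasis-open.org/wsbpel/2.0/process/executable}"

# A minimal, hypothetical BPEL fragment; real processes are far larger.
SOURCE = """
<process xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <partnerLinks>
    <partnerLink name="client"/>
    <partnerLink name="billing"/>
  </partnerLinks>
  <sequence>
    <receive partnerLink="client"/>
    <invoke partnerLink="billing"/>
    <assign/>
    <reply partnerLink="client"/>
  </sequence>
</process>
"""

def context_dependency_ratio(bpel_source):
    """Fraction of basic activities that interact with external partners.

    An illustrative measure only: the lower the ratio, the less the
    process depends on its context, hence the higher its independency.
    """
    root = ET.fromstring(bpel_source)
    interacting = {"receive", "invoke", "reply", "pick"}
    basic = interacting | {"assign", "wait", "empty", "throw"}
    tags = [el.tag[len(BPEL_NS):] for el in root.iter()
            if el.tag.startswith(BPEL_NS)]
    n_basic = sum(1 for t in tags if t in basic)
    n_inter = sum(1 for t in tags if t in interacting)
    return n_inter / n_basic if n_basic else 0.0

ratio = context_dependency_ratio(SOURCE)  # 3 of 4 basic activities face partners
```

The paper’s actual metrics are defined over richer structural properties of the process, but the principle is the same: static, quantitative analysis of the process definition, before any deployment.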
Preliminary contributions of the authors on the subject matter of this paper, presented at conferences, are as follows:
- A. Khoshkbarforoushha, R. Tabein, P. Jamshidi, F. Shams, Towards a Metrics Suite for Measuring Composite Service Granularity Level Appropriateness, 6th World Congress on Services (SERVICES-I), 2010. [PDF]
- A. Khoshkbarforoushha, P. Jamshidi, A. Nikravesh, S. Khoshnevis, F. Shams, A Metric for Measuring BPEL Process Context-Independency, IEEE International Conference on Service-Oriented Computing and Applications (SOCA’09), 2009 (invited to be extended for the SOCA journal). [PDF]
- A. Khoshkbarforoushha, P. Jamshidi, F. Shams, A Metric for BPEL Process Reusability Analysis, International Workshop on Emerging Trends in Software Metrics (WETSoM’10), ICSE 2010, Cape Town, South Africa. [PDF]
In addition, a technical report on the overall contribution of the project has been made available in the following manuscript:
- A. Khoshkbarforoushha, P. Jamshidi, M. Fahmideh, A. Nikravesh, F. Shams, A Metric Suite For Measuring Composite Service Granularity, Technical Report, Automated Software Engineering Research Group, Faculty of Electrical & Computer Engineering, Shahid Beheshti University GC, Tehran, Iran, May 2011. [PDF] (in Persian)
The overall contribution went through three major and one minor revision rounds and was finally accepted in the Springer journal Service-Oriented Computing and Applications (SOCA). The editor and reviewers provided us with comprehensive and constructive comments that made the final manuscript much more elaborate than the first submission.
Accepted for publication in the Springer journal Software and Systems Modeling (SoSyM).
The evolution of enterprise software applications, and especially their shift toward the service-oriented paradigm, demands new ways of architecting what we now call service-oriented systems. Several new methodologies, or extensions of existing ones, based on the concept of service and better fitting current development situations have been proposed and are still under development and experimentation.
The area of method engineering has been researched extensively over the last two decades. Indeed, method engineering has introduced a number of key notions: the product and process aspects of methods, meta-modeling, Computer-Aided Method Engineering (CAME), method rationale, Situational Method Engineering (SME), etc. In the research community, Method Engineering (ME) principles have been promoted as a way to make software development methods agile and adaptable to the particular circumstances of a development team and project.
The main audience of the research reported in this paper is the specific group of software developers known as method engineers, or process engineers. Generally, method engineers are responsible for constructing, tailoring, and maintaining software processes for use in a wide range of software projects in a software development organization. In the realm of service-oriented systems, method engineers need a set of domain-specific method fragments, as reusable building blocks of methodologies, in order to assemble method fragments together and construct a new, project-specific service-oriented methodology. Notwithstanding the multitude of service-oriented development methodologies, the lack of knowledge about service-oriented software development in a well-structured, standard format has long been felt. The proposed method fragments, as methodological knowledge, support method engineers in creating knowledge on developing service-oriented systems and sharing it with other method engineers. Fortunately, OPEN is a good candidate here, because it provides a standard meta-model for representing methodological knowledge via autonomous and coherent method fragments.
In addition, from a method engineer’s point of view, the authors suppose that the contributed method fragments represent pivotal activities, rather than traditional software engineering activities and practices. The proposed fragments should be incorporated into the software development process whenever an inherently complex and dynamic distributed system is being developed and maintained in a service-oriented style. It is generally agreed today that method fragments can capture and represent knowledge about software processes in a well-structured, reusable format.
The project that led to this publication started with the postgraduate thesis of Mahdi Fahmideh Gholami in 2008, titled “Introducing a Set of Process Patterns for Service-Oriented Software Development”.
Preliminary contributions of the authors on the subject matter of this paper, presented at conferences, are as follows:
- M. Fahmideh Gholami, M. Sharifi, P. Jamshidi, F. Shams, H. Haghighi, Process Patterns for Service-Oriented Software Development, Fifth IEEE International Conference on Research Challenges in Information Science (RCIS’11), Guadeloupe, France, May 19-21 2011. [PDF]
- M. Fahmideh Gholami, F. Shams, P. Jamshidi, M. Sharifi, Toward a Methodological Knowledge for Service-Oriented Development Based on OPEN Meta-Model, Software Engineering and Computer Systems, Communications in Computer and Information Science, 2011, Vol 181, Part 5, 631-643. [WWW]
- M. Fahmideh Gholami, J. Habibi, F. Shams, S. Khoshnevis, Criteria-Based Evaluation Framework for Service-Oriented Methodologies, UKSim 2010, 122-130.
- M. Fahmideh Gholami, P. Jamshidi, F. Shams, A Procedure for Extracting Software Development Process Patterns, European Modelling Symposium (EMS’10), 2010. [PDF]
The overall contribution went through two major and one minor revision rounds and was finally accepted in SoSyM. The editor and reviewers provided us with comprehensive and constructive comments that made the final manuscript much more elaborate than the first submission.
We hope that our method fragments will eventually find their way into the OPEN repository, maintained by the not-for-profit OPEN Consortium, an international group of over 35 methodologists, academics, CASE tool vendors, and developers.