RTITM 2017 TEQIP II, MAKAUT SPONSORED. DATE: 9th-10th February, 2017. VENUE: Jalpaiguri Government Engineering College, Jalpaiguri


RTITM 2017 TEQIP II, MAKAUT SPONSORED
National Conference on Recent Trends in Information Technology & Management
DATE: 9th-10th February, 2017
VENUE: Jalpaiguri Government Engineering College, Jalpaiguri
JOINTLY ORGANISED BY Maulana Abul Kalam Azad University of Technology, West Bengal and Jalpaiguri Government Engineering College, Jalpaiguri

Maulana Abul Kalam Azad University of Technology, West Bengal
Formerly known as West Bengal University of Technology
BF-142, Sector-I, Salt Lake City, Kolkata

Vice-Chancellor's Message

It is indeed a pleasure to greet you all during this TEQIP-II sponsored 1st Conference on Recent Trends in Information, Technology and Management, held on 9th-10th February, 2017 at Jalpaiguri Government Engineering College, Jalpaiguri. This conference was organized by a team of students and faculty members from Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal. At the outset, I would like to express my sincere gratitude to the distinguished members of the organizing committee, who have gone to extreme lengths to make this conference a grand success. As I understand, notable efforts have been made to organize this conference in collaboration with the students, faculty and staff, both in-house and off-campus, and we have been able to make this event successful. We further aim at bringing eminent faculty members and students from various leading institutes and universities to find new dimensions in recent developing fields such as Electronic Circuit Design, Machine Learning, Data Analytics and the like. The main aim of the conference was to present a platform for the students to showcase their interesting research and their applications of existing technologies to model and solve real-world problems, and to give the faculty an opportunity to develop an interest amongst the students in current research trends. The conference was of immense help to researchers and students alike in enriching their knowledge and broadening their horizons. I further wish to congratulate the entire team for their commendable work in organizing this conference. I sincerely look forward to a second conference of this kind being organized, and sincerely hope that it will be a grand success once again.

Date: February 9th, 2017, Kolkata
Professor Subrata Kumar Dey
Vice-Chancellor, Maulana Abul Kalam Azad University of Technology

Maulana Abul Kalam Azad University of Technology, West Bengal (formerly West Bengal University of Technology) & Jalpaiguri Government Engineering College, Jalpaiguri. Design & published by Cygnus Advertising (India) Pvt. Ltd., Bengal Eco Intelligent Park, Tower 1, 13th Floor, Unit 29 & 14, Block EM 3, Salt Lake Sector-V, Kolkata.

Disclaimer: The information presented in the Book reflects the views of the authors and not of the Book, its Editorial Board or the publishers. Publication does not constitute endorsement by the Book. No party involved in the preparation of the material contained in the Book represents or warrants that the information contained herein is in every respect accurate or complete, and they are not responsible for any errors or omissions or for the results obtained from the use of such material. Readers are encouraged to confirm the information contained herein with other sources.

Jalpaiguri Government Engineering College
Jalpaiguri, West Bengal

Principal's Message

Warm and happy greetings to all. I am immensely happy that our college, in association with Maulana Abul Kalam Azad University of Technology, is organizing a National Conference on Recent Trends in Information Technology & Management on 9th and 10th February, 2017 and is going to present a collection of various technical papers in the proceedings. Under the able guidance of our management, JGEC continues to march on the way of success with confidence. The sharp and clear-sighted vision and precise decision-making powers of our management have helped our college stay competitive. The dedicated teachers and staff members and disciplined students of JGEC are the added strengths of our college. The role of students in nation-building cannot be overlooked, and students of JGEC are trained in all aspects to become not only successful engineers but also good citizens. On this occasion, I would like to wish the students all the very best. The teachers, staff members and students of MAKAUT have also participated actively in this conference and helped us greatly; I would like to take this opportunity to congratulate and thank them. I also congratulate the teachers, staff members and students of our college, and the participants from our college and other colleges, for their efforts in organizing and participating in this conference, and wish the conference every success.

Date: February 9th, 2017, Jalpaiguri, West Bengal
Professor Amitava Ray
Principal, Jalpaiguri Government Engineering College

National Conference on Recent Trends in Information Technology and Management (RTITM 2017)

PREFACE

The 1st conference entitled Recent Trends in Information, Technology and Management, abbreviated RTITM, was held on the premises of Jalpaiguri Government Engineering College, Jalpaiguri. The conference was directed towards awareness and the building of a positive approach towards the various applications of technology that govern and dominate the technological sector of the country today. Any country needs technology to progress. Development in science and technology is imperative to developing the inner strength of the country; on the other hand, discussing flaws and ways of improvement with a neutral viewpoint helps to update existing technologies so that they do not fall behind in the race against the world market. At the conference, a live interactive session was arranged to create awareness of ongoing research and current trends amongst the students, along with supervisors and professors, so that there could be mutual discussion and sharing of knowledge within the community in an optimum time span, with less effort and at least cost.

At the very outset, we would like to express our sincere gratitude towards all the individual members of the organising committee, who put considerable effort into putting together the conference. The entire committee coordinated with the students, professors, guests and the participating institutes to host the event, and we take immense pride in stating that, as a result of the collaborative efforts of the committee and the staff, both in-house and off-campus, the conference was extremely successful. The papers discussed during the conference ranged over a multitude of fields such as Electronic Circuit Design, Machine Learning, Data Analytics, Management and even Biotechnology. We invited faculty and students alike from various eminent institutes to present papers on multidisciplinary fields. As mentioned above, the conference is particularly aimed at presenting a platform for the students to showcase their interesting research and applications of existing technologies to model and solve real-world problems, and an opportunity for the faculty to develop an interest amongst the students in current research trends. The sessions were broadly classified according to the relevance and interdependency of the papers discussed, pertaining to the main theme of the conference. We are extremely delighted to report that the guests and audience of the conference described it in their feedback as resourceful and interesting.

Dr. Amitava Ray, General Chair, Principal, Jalpaiguri Government Engineering College
Dr. Debashis De, General Chair, Co-ordinator, TEQIP-II, Head, CSE & IT, MAKAUT, WB

6 Committee for National Conference on Recent Trends in Information Technology & Management Chairman: Prof. S. K. Dey, Honourable Vice Chancellor, MAKAUT, WB Advisory Committee: Dr. Amalendu Basu, Director - Directorate of Technical Education, Govt. of West Bengal Prof. Sayed Rafikul Islam, Registrar, MAKAUT, WB Prof. Buddhadeb Chattopadhyay, Academic Advisor, SPFU Mr. Pranabesh Das, Joint Director - Directorate of Technical Education, Govt. of West Bengal Ms. Rachna Bhagat, IAS, District Magistrate, Jalpaiguri General Chairs: Dr. Amitava Ray, Principal, JGEC, Jalpaiguri, WB Dr. Debashis De, Co-ordinator, TEQIP, MAKAUT, WB Programme Chairs: Dr. Dipak Kumar Kole, JGEC, Jalpaiguri, WB Dr. Koushik Majumder, MAKAUT, WB Finance Chair : Dr. Atri Bhowmik, Finance Officer, MAKAUT, WB Finance Co-Chair: Mr. Suvankar Das, Account Officer, JGEC Technical Chairs: Dr. Jaya Bandyopadhyay, MAKAUT, WB Dr. Sreeparna Banerjee, MAKAUT, WB Dr. Raja Banerjee, MAKAUT, WB Dr. Sriyankar Acharyya, MAKAUT, WB Dr. Indranil Mukherjee, MAKAUT, WB Dr. Madhumita Das Sarkar, MAKAUT, WB Prof. Pradip K. Saha, JGEC, WB Dr. Goutam Kumar Panda, JGEC, WB Prof. Subhranta Roy, JGEC, WB Prof. Sudip Mukherjee, JGEC, WB Prof. Shyamapada Sheet, JGEC, WB Dr. Arpan Pradhan, JGEC, WB Dr. Swapan Ray, JGEC, WB Prof. Gautam Bairagi, JGEC, WB Prof. Utpal Kumar Mondal, JGEC, WB Dr. Jishan Mehedi, JGEC, WB Dr. Santanu Das, JGEC, WB Dr. Arindam Saha, JGEC, WB Dr. Bishakha Chakraborty, JGEC, WB Prof. Prabal Deb, Principal, CGEC, Coochbehar, West Bengal Organizing Committee: Dr. Santanu Phadikar, MAKAUT, WB Dr. Koushik Majumder, MAKAUT, WB Dr. Suparna Biswas, MAKAUT, WB Dr. Md. Aftabuddin, MAKAUT, WB Mr. Saikat Basu, MAKAUT, WB Mr. Mihir Sing, MAKAUT, WB Mr. Santanu Chatterjee, MAKAUT, WB Mr. Subhanjan Sarkar, MAKAUT, WB Prof. Shreyasi Dutta, JGEC, WB Ms. Ananya Bose, JGEC, WB Prof. Srinibas Rana, JGEC, WB Prof. Dhiman Mondal, JGEC, WB Dr. Soupayan Mitra, JGEC, WB Prof. Ashim Mahaptra, JGEC, WB Mr. Vivekanada Biswas, JGEC, WB Mr. Ashim Roy, JGEC, WB Dr. Subhasis Maitra, JGEC, WB Prof. Chinmay Ghosh, JGEC, WB Prof. Aditya Kumar Samanta, JGEC, WB Prof. Subrata Bhattacharya, JGEC, WB

7 National Conference on Recent Trends in Information Technology and Management RTITM 2017 List of Session Chairs Sl. No. Name Affiliation 1 Dr. Md. Aftafuddin Maulana Abul Kalam Azad University of Technology 2 Dr. Debashis De Maulana Abul Kalam Azad University of Technology 3 Mr. Subhanjan Sarkar Maulana Abul Kalam Azad University of Technology 4 Dr. Koushik Majumder Maulana Abul Kalam Azad University of Technology 5 Mr. Mihir Sing Maulana Abul Kalam Azad University of Technology 6 Dr. Madhumita Das Sarkar Maulana Abul Kalam Azad University of Technology 8 Dr. Santanu Phadikar Maulana Abul Kalam Azad University of Technology 9 Mr. Saikat Basu Maulana Abul Kalam Azad University of Technology 10 Mr. Santanu Chatterjee Maulana Abul Kalam Azad University of Technology 11 Dr. Goutam Kumar Panda Jalpaiguri Government Engineering College 12 Dr. Dipak Kumar Kole Jalpaiguri Government Engineering College 13 Prof. Subhranta Roy Chowdhury Jalpaiguri Government Engineering College 14 Dr. Jishan Mehedi Jalpaiguri Government Engineering College 15 Prof. Pradip Kumar Saha Jalpaiguri Government Engineering College 16 Dr. Santanu Das Jalpaiguri Government Engineering College 17 Dr. Arpan Pradhan Jalpaiguri Government Engineering College 18 Dr. Soupayan Mitra Jalpaiguri Government Engineering College 19 Dr. Bishakha Chakraborty Jalpaiguri Government Engineering College Name Dr. Dipankar Bhanja Dr. Biswajit Paul Mr. Dhrubasish Sarkar Dr. Debaprasad Das Dr. Debasish Dutta Dr. Sandip Chakraborty Dr. Manik Chandra Das Dr. Apurba Sarkar Dr. Imon Mukherjee Dr. Surojit Ghosh Dr. Sourav Dhar Dr. Dipak Kumar Kole Mr. Malay Kule Dr. Ranjit Ghoshal Reviewer s List Affiliation Assistant Professor, Dept. of Mechanical Engineering, Assam Silchar Assistant Professor, Dept. of Mathematics, South Malda College Assistant Professor, Amity University, Kolkata Professor, Dept. of Electronics and Telecommunication Engineering, Assam University Assistant Professor, Indian Institute of Management, Ranchi. Dean-Academics, Baddi University, HP Associate Professor, Dept. of Automobile, MCKVIE, Howrah Assistant Professor, Dept. of Computer Science & Technology, IIEST, Shibpur Assistant Professor, Dept. of Computer Science & Engineering, Indian Institute of Information Technology, P.O. - Kalyani, District - Nadia, Pin Assistant Director (Technical), Institute Engineers (India), 8, Gokhale Road KOL-20 Professor Dept. of Electronics & Communication Engineering, Sikkim Manipal Institute of Technology, Sikkim. Associate Professor and Head, Dept. of Computer Science & Engineering, Jalpaiguri Govt. Engineering College, West Bengal Assistant Professor, Dept. of Computer Science & Technology, IIEST, Shibpur Assistant Professor, Dept. of Information Technology, STCET, Kolkata

National Conference on Recent Trends in Information Technology and Management
RTITM 2017, 9th-10th February 2017
Organized by Maulana Abul Kalam Azad University of Technology
In association with Jalpaiguri Government Engineering College
Detailed Technical Program

9 Session Chairs: Dr. Md. Aftafuddin, MAKAUT, WB. Dr. Goutam Kumar Panda, JGEC, WB. Time/Date: 9th February 2017 (Day 1) 2:00 3:30 pm Venue: EDUSAT (CSE DEPT) Paper ID Title Author(s) 9 RECOMMENDATION SYSTEM USING SOCIAL NETWORK ANALYSIS - Bidisha Sarkar, Ushma Loksham, Sunita Limbu, Dipika Jain, Sritama Chowdhury, Prof. Dhrubashish Sarkar 10 A High-Yield Function Mapping Technique for Defective Nano-Crossbar Circuits - Anshuman Bose Majumdar, Subhradeep Chakraborty, Malay Kule 13 Selection of Optimum Power Plant Using Multi Criteria Decision Making (MCDM) Tool - Ritu, Parth Raval, Shabbiruddin 17 MCX Crude Oil Price Trend Forecasting Using Naive Bayes Classifier - Amit Gupta, Subrata Kumar Mandal, Animesh Hazra, Md. Rasid Ali, Pritesh Ranjan, Suparna Podder 18 Improving the Prediction Accuracy of Diabetes Diagnosis using Naïve Bayes, Binary Logistic Regression and K-Nearest Neighbour Classifiers - Animesh Hazra, Subrata Kumar Mandal, Amit Gupta, Arindam Ghosh, Abhisek Hazra, Abhisek Sutradhar Session Chairs: Mr. Subhanjan Sarkar, MAKAUT, WB. Prof. Subhranta Roy Chowdhury, JGEC, WB. Time/Date: 9th February 2017 (Day 1) 5:00 pm 6:00 pm Venue: EDUSAT (CSE DEPT) Paper ID Title Author(s) 33 Analyzing User Activities Using Vector Space Model in Online Social Networks - Dhrubasish Sarkar, Premananda Jana 52 Use of error detection and correction in communication engineering - Arkopal Ray 53 Information Capacity Theorem Ranodeep Saha 54 Green Engines Development Using Compressed Natural Gas as an Alternative Fuel - Ananya Bhattacharyya Session Chairs: Dr. Debashis De, MAKAUT, WB. Dr. Dipak Kumar Kole, JGEC, WB. Time/Date: 9th February 2017 (Day 1) 3:45 pm 5:00 pm Venue: EDUSAT (CSE DEPT) Paper ID Title Author(s) 20 A Decision Making Model through Cloud Computing - Pinakshi De, Tuli Bakshi 25 Visual Brain Computer Interface - Suhel Chakraborty, Priyajit Ghosh 30 Implementation of Cloud Based Centralized Compiler - Arpit Sanghai, Snehasish Das, Simon Lepcha, Anmol Binani, Anirban Sarkar 31 Crop Disease Analysis using Soft Computing and Image Processing - Kumar Arindam Singh, Surajit Dhua, Pritam Dey, Suraj Kumar Bernwal, Md. Aftabuddin Sk, Dhiman Mondal 34 Comparison of Various Methods to Predict the Occurrence of Diabetes in Female Pima Indians Somnath Rakshit, Suvojit Manna, Riyanka Kundu, Sanket Biswas, Priti Gupta, Sayantan Maitra, Subhas Barman Session Chairs: Dr. Madhumita Das Sarkar, MAKAUT, WB. Dr. Santanu Das, JGEC, WB. Time/Date: 9th February 2017 (Day 1) 2:00 3:30 pm Venue: LANGUAGE LAB Paper ID Title Author(s) 1 Management of Convective Heat Transfer of Cold Storage with Cylindrical Pin Finned Evaporator Using Taguchi L9 OA Analysis - Dr. N. Mukhopadhyay 19 MOBILE SELECTION USING VIKOR METHOD - Arnab Roy, Sk Raihanur Rahman, Ranajit Midya, Subhabrata Mondal, Dr. Anupam Haldar 7 Automation, Security and Surveillance for a Smart City Smart Security System - Subhadeep Datta, Raunak Nayak, Sujan Kr. Dhali 11 A SOLAR REFRIGERATION SYSTEM TO REDUCE COOLING WATER CONSUMPTION IN A THERMAL POWER PLANT - PROF. ASIM MAHAPATRA, BISHAL DEY 12 Presence of employability related nontechnical competencies in the existing hospitality curriculum in West Bengal: A Study Santanu Dasgupta, Deeptiman Bhattacharya

10 Session Chairs: Dr. Santanu Phadikar, MAKAUT, WB. Dr. Arpan Pradhan, JGEC, WB. Time/Date: 9th February 2017 (Day 1) 3:45 pm 5:00 pm Venue: LANGUAGE LAB Paper ID Title Author(s) 39 CHARGE SIMULATION METHOD BASED EARTHING GRID DESIGN AS AN ALTERNATIVE TO CONVENTIONAL APPROACHES Sumanta Dey, Debojyoti Mondal, Bapi Das, Amit Kumar Mondal, Anik Manik, Abhishek Roy, Dr. Santanu Das 14 Effect of Property Management System on the hotels. A case study on the star hotels of Siliguri Govind Baibhaw (AP, SIT), Soumyadipta Mitra (AP, SIT) 47 A STUDY OF POTENTIAL DISTRIBUTION ON DISC INSULATOR USING CHARGE SIMULATION METHOD - Tanmay Saha, Suman Dutta, Kushal Sarkar, Saswata Goswami, Dr. Santanu Das 21 Multiple Criteria Analysis Based Robot Selection: A De Novo Approach - Bipradas Bairagi, Balaram Dey, Bijan Sarkar, Subir Kumar Syanyal 2 A production inventory model with shortage and rework Subhankar Adhikari Session Chairs: Mr. Mihir Sing, MAKAUT, WB. Prof. Pradip Kumar Saha, JGEC, WB. Time/Date: 10th February 2017 (Day 2) 11:30 am 12:45 pm Venue: EDUSAT (CSE DEPT) Paper ID Title Author(s) 48 Gaussian Noise Removal Non-linear Adaptive Filter using Polynomial Equation Pritam Bhattarcharya, Sayan Halder, Samrat Chakraborty and Amiya Halder 49 TO SUGGEST A BETTER STRATEGY FOR MARKETING USING SOCIAL NETWORK ANALYSIS Sayari Mondal, SnehaSaha, Madhumita Das, Kriti Pal, Nabanita Das, Chinmoy Ghosh 51 An Efficient Method for Twitter Text Classification using Two Class Boosted Decision Tree Somnath Rakshit, Prof. Srimanta Sinha Roy 56 Recent Trends in Information Technology - Biswajit Kundu Session Chairs: Dr. Koushik Majumder, MAKAUT, WB. Dr. Jishan Mehedi, JGEC, WB. Time/Date: 10th February 2017 (Day 2) 10:00 am 11:30 am Venue: EDUSAT (CSE DEPT) Paper ID Title Author(s) 23 OPTIMIZATION OF FUEL REQUIREMENT OF A VEHICLE - PROF. DR. SUDIP MUKHERJEE, SUBHADIP DAS 32 Modified Carry Increment Adder (CSLA-CIA) Snehanjali Majumder, Nilkantha Rooj 37 4G Service selection for out-station students using Analytical Network Process Method - Gaurab Paul, Surath Banerjee, Ranajit Midya, Dipika Pramanik, Dr. Anupam Haldar 42 Innovative idea to choose a distribution generation Abhisek Mondal, Payal Rani 46 Lookup Table based Genome Compression Technique Syed Mahamud Hossein, Aditya Kumar Samanta Session Chairs: Mr. Saikat Basu, MAKAUT, WB. Dr. SOUPAYAN MITRA, JGEC, WB. Time/Date: 10th February 2017 (Day 2) 10:00 am 11:30 am Venue: LANGUAGE LAB Paper ID Title Author(s) 27 Selection of Electric Car by using Fuzzy AHP-TOPSIS Method - Projit Mukherjee, Tapas K. Biswas and Manik C. Das 35 VENDOR SELECTION OF VEHICLE SILENCER - ABHISHEK BHARTI, Dipika Pramanik, Ranajit Midya, Dr. Anupam Haldar 38 Multi Criteria Supplier Selection Using Fuzzy PROMETHEE - Aditya Chakraborty, Sounak Chattopadhyay, Ranajit Midya, Dipika Pramanik, Dr.Anupam Haldar 41 TURNING OF INCONEL 625 WITH COATED CARBIDE INSERT Kiran Tewari, Akshee Shivam, Santanu Chakraborty, Amit Kumar Roy, Dr. B. B. Pradhan 44 Power Generating Suspension System - Shahbaz Chowdhury, Rachit Chaudhary, Utsavkumar, Akshee Shivam, Santanu Chakraborty, Dr. A. P Tiwary, Dr. B. B. Pradhan

Session Chairs: Mr. Santanu Chatterjee, MAKAUT, WB; Dr. Bishakha Chakraborty, JGEC, WB. Time/Date: 10th February 2017 (Day 2), 11:30 am - 12:45 pm. Venue: LANGUAGE LAB
Paper ID, Title, Author(s):
45 Understanding the physico-chemical properties of gadolinium encased fullerene molecule - Kunal Biswas, Jaya Bandyopadhyay, Debashis De
24 STUDY OF MECHANICAL AND TRIBOLOGICAL PROPERTIES OF Al-Si ALLOY WITH VARYING PERCENTAGE OF ALUMINIUM AND TIN - Somnath Das, Kandrap Kumar Singh, Amlan Bhuyan, Akshee Shivam, Santanu Chakraborty, Dr. B. B. Pradhan
50 DEVELOPMENT OF AN AHP-BASED EXPERT SYSTEM FOR NON TRADITIONAL MACHINES SELECTION - Khalil Abrar Barkati, Shakil Ahmed, Anupam Halder, Sukanta Kumar Naskar
55 Management aspects of rice mill industries at Jalpaiguri - Soupayan Mitra, Arnab Bhattacharya
57 FIGHTER AIRCRAFT SELECTION USING TOPSIS METHOD - Sagnik Paul, Sankhadeep Bose, Joydeep Singha, Ranajit Midya, Dr. Anupam Haldar

National Conference on Recent Trends in Information Technology and Management RTITM 2017
List of Invited Talks
Sl. No., Name, Affiliation, Topic:
1 Dr. Arun Kr. Sit - ICRCM - Computers in Agriculture
2 Prof. R. K. Samanta - Dept. of Computer Science, North Bengal University - Artificial Intelligence and Knowledge Discovery
3 Dr. Atri Bhowmik - Finance Officer, MAKAUT, WB - Application of Total Quality Management & IPSAS in the Higher Education Sector
4 Dr. Madhumita Das Sarkar - Dept of Computer Sc. & Engg., MAKAUT, WB - Recent Trends in Photovoltaic Technology
5 Dr. Santanu Phadikar - ICRCM - Precision Agriculture
6 Dr. Soupayan Mitra - Jalpaiguri Government Engineering College - Potential of Rice Husk as Fuel in Jalpaiguri

Information Technology and Agriculture
Dr. Arun Kumar Sit, Principal Scientist (Hort), ICAR - Central Plantation Crops Research Institute, Research Centre, Mohitnagar, Jalpaiguri, WB. aruncpcrircm@gmail.com

Abstract Information technology (IT) plays an important role in the dissemination of information to the farming community and in knowledge sharing among various agricultural stakeholders, including students and researchers. In the 20th century, the goal of IT was limited to storing and displaying information; however, modern technological advancement has changed its role with time. Now, IT is used everywhere, such as in the dissemination of information through websites, web portals, expert systems, kisan call centres, mobile apps and e-learning materials for farmers, students and researchers. Digital Green is one such important success. For agri-business, various online trading and marketing portals have already reached vendors. Various software packages (like MS-Excel, SAS, Indostat, Windostat, SPSS, SPAR, SPAB, SPBD, SPAD, SPFE, SPDA, SYSTAT, MSTAT, Gnumeric, R, online analysis of data, web-access statistical packages, etc.) are gifts of IT and are continuously used by agricultural scientists in agricultural research for the statistical analysis of their research data. IT also plays an important part in remote sensing, geographical information systems and allied disciplines to assess cropping area, management, land-use planning, surveillance of various pests and diseases, and so on. It has also made it possible to conduct gene-level studies in plants through sequencing data analysis, for the evolution of particular species, conservation of genetic identity, gene modification, drug discovery, etc.

Invited Talks

Artificial Intelligence and Knowledge Discovery
Prof. (Dr.) R. K. Samanta, Professor & Head, Department of Computer Application, Siliguri Institute of Technology, Darjeeling. URL:

Abstract Most AI research areas, such as knowledge formulation, planning, reasoning, NLP, game playing and robotics, have concentrated on the development of symbolic and heuristic methods to solve complex problems efficiently. One can find from the literature that these methods have also found extensive use in data mining and knowledge discovery. The discovery process in data mining shares various methods and algorithms used for the same purpose of knowledge acquisition from data, or learning from cases or examples. Symbolic reasoning, heuristic search, divide and conquer, and knowledge acquisition from multiple sources have been common techniques in data mining and AI research. Moreover, one can find some common chapters meant for AI study and for data mining and knowledge discovery study, which suggests common areas of interest. We have tried to discover those common areas and techniques used in AI and in data mining and knowledge discovery.

Application of Total Quality Management & IPSAS in the Academic Institutions
Dr. Atri Bhowmik, Finance Officer, Maulana Abul Kalam Azad University of Technology. fo@wbut.ac.in

Abstract The basic aim of the study is to suggest measures to introduce TQM in different layers of the Academic Institutions and to introduce International Public Sector Accounting Standards (IPSAS) in the financial disclosures for establishing good governance. Governance is found to be good when it is featured with transparency, equity, access, a world-class teaching and research environment and, of course, greater autonomy and freedom, with more light and air from the environment. The Institutions of Higher Learning are the hub of the creation and dissemination of knowledge. They are the major force in building an inclusive and diverse knowledge society and in advancing research, innovation and creativity. They also help to narrow the gap between the developed and developing countries. For a little more than the last two and a half decades, the waves of changes, challenges, opportunities and threats of academic globalization have shaken the roots of the traditionally managed universities of the developing world. The market for HE was opened, and suddenly the Academic Institutions found themselves players in a market-driven, competitive international market where the stakeholders are extremely quality conscious and demanding. The paradigm shifts compelled the universities of the developing countries to address issues like the establishment of academic audit and internal control mechanisms, ensuring transparency and flexibility of university governance, effective Financial Control Mechanisms, ethical behaviour with the stakeholders, and ensuring relevance, equity & access of HE, to name a few. Every shift of the transforming world calls for a common requirement, which is QUALITY. In HE, quality signifies a lot of things, right from the relevance of a course to aspects like stakeholders' satisfaction, access, equity, disclosure of facts, etc. Total Quality Management is the generic management philosophy which integrates all organizational functions, creates the necessary management structure, develops strategies, designs mechanisms to adopt the requirements of change and ends with the assignment of resources. The word total signifies that everyone in the university system is involved with the transformation process. TQM thus becomes the art of managing the whole to achieve excellence. It provides the organization with a sustainable competitive edge through continuous quality improvement. Transformations are required everywhere, from managerial and cultural standpoints to operational levels. But most universities have a closed system with a focus on regional and traditional philosophy, and people within the system are afraid of the restructuring process due to fear of dislodging their hierarchy in the management.
There is resistance to accepting transformation due to the absence of a culture and market focus. The introduction of TQM calls for the introduction of institutionalized practical skill-development, motivational and professional training programmes to improve the attitude of the employees. It ensures the total involvement of the work force, moving at the same frequency level.

Finance is the life-blood of any organization. Financial governance ensures the optimal utilization of resources by channelling them into the most remunerative ways. There must be adequate Financial Autonomy given to the departments to enable them to function properly. It is also very important to present the financial disclosures of the Academic Institutions in the most transparent and harmonized manner, so that they become meaningful to all stakeholders of the system. International Public Sector Accounting Standards (IPSAS) is a set of accounting standards issued by the IPSAS Board for use by public sector entities around the world in the preparation of financial statements. These standards are based on the International Financial Reporting Standards (IFRS) issued by the International Accounting Standards Board (IASB). The IPSASB aims to strengthen public financial management and knowledge globally through the enhancement of the quality and transparency of public sector financial reporting, by developing high-quality public sector financial reporting standards, developing other publications for the public sector, and raising awareness of IPSAS and the benefits of their adoption. The main objectives of the IPSAS are to improve the quality of general-purpose financial reporting by public sector entities, which leads to better informed assessments of the resource allocation decisions made by governments, ensuring more transparency and accountability. In order to make the universities good players of the global village, TQM appears to be one of the most effective tools for attaining sustainable competitive advantage, while IPSAS may make the Financial Disclosures more harmonised and transparent in every respect. The application of TQM and IPSAS thus becomes the true mirror of good governance, which may give the Academic Institution a Human Face.

Recent Trends in Photovoltaic Technology: Issues and Challenges
Dr. Madhumita Das Sarkar, Associate Professor (Microelectronics), Dept of CSE, MAKAUT, WB. dassarkar.madhumita@gmail.com

Abstract The rising consumption of conventional fuel, together with increasing greenhouse gas emissions, threatens our secure energy supply. Moreover, after the industrial revolution, energy demands have increased tremendously, which results in conventional fuels being consumed at a much faster rate than they are produced. The reserves are limited, while our demands on these resources are unlimited. Therefore, the development of clean, secure, sustainable and affordable energy sources should be the priority in this century. Among the different kinds of alternative resources, Solar (Photovoltaic) energy is attracting more researchers, as it is an abundant and large source of energy. The Global Compulsion and Global Scenario are addressed in this respect. The Indian Scenario is scrutinized and compared with the global model. The International Road Map furnished by NREL is put up for a comparative study of the different emerging technologies of Solar PV research. In this context, understanding the fundamentals of Solar Photovoltaic Cells and their equivalent circuit model is important. The detailed equivalent circuit model, along with the estimation of the circuit parameters of a photovoltaic panel, makes it possible to develop the control technique for maximum power point tracking. The fundamental and technological losses in Solar Cells are discussed for obtaining optimal efficiencies. The recent trends in the development of Grid-Connected and Off-Grid Photovoltaic Systems in India are also addressed, highlighting the challenges and issues in Technology and Components. The implementation of Grid-Connected and Off-grid Photovoltaic systems at MAKAUT is pointed out.

Invited Talks

Precision Agriculture
Dr. Santanu Phadikar, Assistant Professor, Department of Computer Science and Engineering, West Bengal University of Technology, BF-142, Salt Lake City, Sector-1, Kolkata, West Bengal

Abstract Precision agriculture is the new concept of farming technology, built on the collaboration of agricultural engineering, computer science and engineering, electronics engineering and mechanical engineering. The goal of Precision Agriculture (PA) is to detect field problems timely, correctly and in precise locations, and to apply the remedies in an optimised way to reduce production cost, maximize profit and also save the environment. The major steps of PA are (a) data acquisition, (b) decision making and (c) controlling field variables. Data can be acquired in two ways: a map-based approach and a sensor-based approach. The map-based approach collects data using global positioning systems, remote sensing technology, geographical information systems, soil sampling and yield monitors, but it is not capable of monitoring data in a real-time environment. The sensor-based approach, on the other hand, collects real-time data from in-field sensors and transfers these data to a base station on a pre-defined schedule. Once data are collected, they are processed to determine what the problem is, its exact location and its stage. Based on the decision, appropriate devices are activated at the specific location for the predefined time to control the field variables.

Potential of Rice Husk as a Fuel in Jalpaiguri
Dr. Soupayan Mitra, Associate Professor, Mechanical Engineering Department, Jalpaiguri Govt. Engineering College

Abstract The potential of rice husk, as a decentralized bio-resource fuel, in all the seven blocks under Jalpaiguri district of West Bengal is investigated. The advantages of and scope for using rice husk as a source of electricity generation in different blocks of the district are estimated, and the available technology for power generation with rice husk is discussed. The corresponding electricity demand in the district is compared. It has been shown that rice husk has the potential to meet about 25% of the electricity demand of the district. Present obstacles to using this promising fuel source for power generation are identified and possible solutions are suggested. Very little research work using bio-resources as fuel for power generation, particularly rice husk, has previously been carried out for Jalpaiguri district.

National Conference on Recent Trends in Information Technology and Management RTITM 2017

List of Best Papers
Title - Authors
Modified Carry Increment Adder (CSLA-CIA) - Snehanjali Majumder, Nilkantha Rooj
Power Generating Suspension System - Shahbaz Chowdhury, Rachit Chaudhary, Utsav Kumar, Akshee Shivam, Santanu Chakraborty, AP Tiwary, BB Pradhan
Understanding the Physico-chemical Properties of Gadolinium Encased Fullerene Molecule - Kunal Biswas, Jaya Bandyopadhyay, Debashis De
An Efficient Method for Twitter Text Classification Using Two Class Boosted Decision Tree - Somnath Rakshit, Srimanta Sinha Roy
Information Capacity Theorem - Ranodeep Saha
Implementation of Cloud Based Centralized Compiler - Arpit Sanghai, Snehasish Das, Simon Lepcha, Anmol Binani, Anirban Sarkar

Hospitality Committee
Sl. No. Name
1 Somnath Rakshit
2 Pronab Mukherjee
3 Rohit Panda
4 Ankit Dey
5 Raunak Nayak
6 Subhadeep Datta
7 Sruti Rai
8 Dhananjay Yadav
9 Suvojit Manna
10 Pritam Sarkar
11 Amit Kumar Singh

16 Contents Sl. No. Title Author Page 1 Recommending Books Using Social Network Analysis Bidisha Sarkar, Ushma Loksham, Sunita Limbu, Dipika Jain, Sritama Chowdhury, Dhrubashish Sarkar 1 2 A High-Yield Function Mapping Technique for Defective Nano- Crossbar Circuits 3 Selection of Optimum Power Plant Using Multi Criteria Decision Making (MCDM) Tool 4 MCX Crude Oil Price Trend Forecasting Using Naive Bayes Classifier 5 Improving the Prediction Accuracy of Diabetes Diagnosis Using Naïve Bayes, Binary Logistic Regression and K-nearest Neighbour Classifiers 6 A Decision Making Model Through Cloud Computing Anshuman Bose Majumdar, Subhradeep Chakraborty, Malay Kule 5 Ritu, Parth Raval, Shabbiruddin Amit Gupta, Subrata Kumar Mandal, Animesh Hazra, Md. Rasid Ali, 15 Pritesh Ranjan, Suparna Podder Animesh Hazra, Subrata Kumar Mandal, Amit Gupta, Arindam Ghosh, Abhisek Hazra, Abhisek Sutradhar 20 Pinakshi De, Tuli Bakshi 7 Visual Brain Computer Interface Suhel Chakraborty, Priyajit Ghosh 31 8 Implementation of Cloud Based Centralized C, C++, Java Compiler 9 Comparison of Various Methods to Predict the Occurrence of Diabetes in Female Pima Indians 10 Use of Error Detection and Correction in Communication Engineering 11 Management of Convective Heat Transfer of Cold Storage With Cylindrical Pin Finned Evaporator Using Taguchi L9 OA Analysis 12 Mobile Selection Using Vikor Method Arpit Sanghai, Snehasish Das, Simon Lepcha, Anmol Binani, Anirban Sarkar 35 Somnath Rakshit, Suvojit Manna, Riyanka Kundu, Sanket Biswas, Priti Gupta, Sayantan Maitra, Subhas Barman Arkopal Ray Dr. N. Mukhopadhyay Arnab Roy, Sk Raihanur Rahman, Ranajit Midya, Subhabrata Mondal, Dr. Anupam Haldar

17 Sl. No. Title Author Page 13 Automation, Security and Subhadeep Datta, Surveillance for a Smart City A Solar Refrigeration System Asim Mahapatra, Bishal Dey to Reduce Cooling Water Consumption in a Thermal Power 68 Plant 15 Presence of Employability Related Nontechnical Santanu Dasgupta, Deeptiman Bhattacharya Competencies in the Existing 73 Hospitality Curriculum in West Bengal: A Study 16 Transmission Line Cost Sk. Mafizul Islam Allocation by Orthogonal 81 Projection 17 Charge Simulation Method Based Earthing Grid Design as an Alternative to Conventional Approaches 18 Effect of property management system (PMS) on hotels. A case study on the star hotels of Siliguri 19 A Study of Potential Distribution on Disc Insulator Using Charge Simulation Method 20 Modified Carry Increment Adder (CSLA-CIA) 21 4G Service Selection for Outstation Students Using Analytical Network Process Method 22 Lookup Table Based Genome Compression Technique 23 To Suggest a Better Strategy for Marketing Using Social Network Analysis 24 An Efficient Method for Twitter Text Classification Using Two Class Boosted Decision Tree 25 Selection of Electric Car by using Fuzzy AHP-TOPSIS Method Debojyoti Mondal, Sumanta Dey, Anik Manik, Bapi Das, Amit Kumar Mandal, Abhisek Roy Govind Baibhaw, Soumyadipta Mitra Tanmay Saha, Suman Dutta, Kushal Sarkar, Saswata Goswami, Santanu Das 95 Snehanjali Majumder, Nilkantha Rooj Gaurab Paul, Surath Banerjee, Ranajit Midya, Dipika Pramanik, Dr. Anupam Haldar 102 Syed Mahamud Hossein, 106 Aditya Kumar Samanta Sayari Mondal, Sneha Saha, Madhumita Das, Kriti Pal, Nabanita Das, Chinmoy Ghosh 113 Somnath Rakshit, Srimanta Sinha Roy Projit Mukherjee, Tapas K. Biswas, Manik C. Das Sl. No. Title Author Page 26 Turning of Inconel 625 With Coated Carbide Insert Kiran Tewari, Akshee Shivam, Santanu Chakraborty, Amit Kumar Roy, B B Pradhan Power Generating Suspension System 28 Understanding the Physicochemical Properties of Gadolinium Encased Fullerene Molecule 29 Study of Mechanical and Tribological Properties of Al-Si Alloy With Varying Percentage of Aluminium and Tin 30 Development of an AHPbased Expert System for Non Traditional Machines Selection 31 Management Aspects of Rice Mill Industries at Jalpaiguri 32 Survey on Crop Disease Analysis using Soft Computing and Image Processing Techniques 33 Analyzing User Activities Using Vector Space Model in Online Social Networks 34 Fighter Aircraft Selection Using Topsis Method 35 Vendor Selection of Vehicle Silencer 36 Multi Criteria Supplier Selection Using Fuzzy Promethee 37 Gaussian Noise Removal Nonlinear Adaptive Filter Using Polynomial Equation Shahbaz Chowdhury, Rachit Chaudhary, Utsav Kumar, Akshee Shivam, Santanu Chakraborty, A P Tiwary, B B Pradhan Kunal Biswas, Jaya Bandyopadhyay, Debashis De Somnath Das, Kandrap Kumar Singh, Amlan Bhuyan, Akshee Shivam, Santanu Chakraborty, Dr. B. B. Pradhan Khalil Abrar Barkati, Shakil Ahmed, Anupam Halder, Sukanta Kumar Naskar 141 Soupayan Mitra, Arnab Bhattacharya Kumar Arindam Singh, Pritam Dey, Surajit Dhua, Suraj Bernwal, Md. Aftabuddin Sk., Dhiman Mondal Dhrubasish Sarkar, Premananda Jana Sagnik Paul, Sankhadeep Bose, Joydeep Singha, Dipika Pramanik, 159 Ranajit Midya, Dr. Anupam Haldar Abhishek Bharti, Dipika Pramanik, 163 Ranajit Midya, Dr. Anupam Haldar Aditya Chakraborty, Sounak Chattopadhyay, Sourav Gupta, Ranajit Midya, 167 Dipika Pramanik, Dr. Anupam Haldar Pritam Bhattarcharya, Sayan Halder, Samrat Chakraborty, Amiya Halder 172

National Conference on Recent Trends in Information Technology and Management (RTITM 2017)

Recommending Books Using Social Network Analysis
Bidisha Sarkar a, Ushma Loksham a, Sunita Limbu a, Dipika Jain a, Sritama Chowdhury a, Prof. Dhrubasish Sarkar b
a Department of Computer Science and Engineering, Jalpaiguri Govt. Engineering College, Jalpaiguri, India
b School of Management & Allied Courses, Amity University, Kolkata, India

Abstract Nowadays, social media plays a major role in connecting people around the world and provides an easy platform for sharing information. Social media users make various decisions, like purchasing an item, adding a friend, rating an item, etc., on a daily basis. Recommendation Systems have been developed to help users make decisions on social media, and Social Network Analysis leads to better recommendation. In this paper we propose a hybrid model which utilises various graph properties along with Content-Based and Collaborative Filtering to form a refined and more complex recommendation analogy by analysing user profiles on goodreads.com. The bipartite graph with the users and books as the respective nodes has been studied and examined to predict the behaviour of the nodes.

Keywords: recommendation system; collaborative filtering; content-based filtering; item-based filtering; social network analysis; betweenness centrality; eigenvector centrality

1. Introduction

A recommender is an autonomous system, commonly used for product recommendation, which recommends items that would be interesting to the users and would also be efficient [1]. Such systems are widely used on various sites like Amazon, Ebay, Facebook etc., where users are provided with auto-generated suggestions. Classical recommendation algorithms mainly have two approaches, namely Content-Based (CB) methods and Collaborative Filtering (CF). Here we use a hybrid model to build our proposed system. Collaborative Filtering computes the similarity between different users based on their similar tastes in items. Items are recommended to a user if they are highly rated by users with similar tastes, as observed from their given profiles. User feedback is collected using the ratings. In the case of Content-Based methods, the contents of the products are taken into account to determine and represent which contents are similar, and a different item with similar content is recommended to the user. For example, if a pink silk scarf is liked by the user (added to the wish list), similar silk scarves will be recommended to the user. Both Content-Based and Collaborative Filtering have their own advantages and disadvantages. A Content-Based system can uniquely categorize the users. Collaborative Filtering is advantageous where the features are not strongly associated with the items and cannot be determined using a Content-Based system. CF also suffers from two problems: the Sparse Matrix problem, where most users do not rate many items and a sparse rating matrix is generated, and the First Rating problem, where an item cannot be recommended unless it has previously been rated by some user. Here we intend to build a recommendation system which takes book recommendation as the representative case.

2. Background

2.1 Content-Based Filtering

In the Content-Based method, as shown in Eq. 1, we take the content information as text documents and the user ratings as class labels. We learn the user profile from the set of rated books.
The contents of the items liked by the user are taken into consideration, and it is highly likely that the recommended item will also be liked by the user. (1)

where I_j and U_i are k-dimensional vectors of item j and user i, respectively.

Table 1
BOOK | AUTHOR | GENRE | LANGUAGE
KABULIWALAH | RABINDRANATH TAGORE | DRAMA | BENGALI
RAGE OF ANGELS | SIDNEY SHELDON | CRIME | ENGLISH
CHOKHER BAALI | RABINDRANATH TAGORE | DRAMA | BENGALI
THE NAKED FACE | SIDNEY SHELDON | DRAMA | ENGLISH

In Table 1, if User1 likes and rates the book Kabuliwalah, it is highly possible that he would like the book Chokher Baali: as per the contents, the two items are 100% similar. In the case of Rage of Angels and The Naked Face, the contents are 66.66% matching, as per the author and language.

2.2 Collaborative Filtering

In Collaborative Filtering, we determine the similarity between the users as well as the items using Pearson's Correlation function [3]. We use the user-item matrix, where we determine and calculate the missing ratings by judging the similarity of items (item-based filtering) and users (user-based filtering). In user-based collaborative filtering, we compute the average rating for different users and find the users most similar to the user for whom we are seeking recommendations, as shown in Eq. 2 and Eq. 3, where Eq. 2 is the Pearson Correlation function and Eq. 3 is the prediction function. (2) (3) Unfortunately, in most online systems users do not have many ratings; therefore, the averages and similarities may be unreliable. This often results in a different set of similar users when new ratings are added to the system. On the other hand, products usually have many ratings, and their average and the similarity between them are more stable. In item-based CF, we perform collaborative filtering by finding the most similar items. We also determine the Closeness and Eigenvector Centrality and the Clustering of the nodes, where the nodes are users and books. A bipartite graph is formed and we manipulate it to find the various relations within the nodes using the software Gephi and Weka.

2.3 Data Collection

Content information for each book is collected from goodreads.com via a simple web crawler, following the links provided from the main URL. The contents gathered are user ratings, genre, author and language, for the purposes of the study.

3. Proposed Technique and Tools

After collection of the data in csv format, we import it and manipulate all the data in Gephi and NodeXL. A bipartite graph is formed, from which we obtain the degree centrality, eigenvector centrality and betweenness centrality of the nodes. The transitivity and closeness between the nodes help us determine the relations between the nodes for generating an advanced and more refined recommendation system. The clustering is determined using Weka. The more clustering there is, the more socially connected the users are, and transitivity comes into action. The eigenvector centrality determines the popularity factor of a node (user) and the extent of his influence over other users, whether he is connected to them or not. The choice of Threshold [5], or what we call the Confidence [5], is implemented using the learning curve of the Content-Based predictor. The similarity between the users and the items is generated keeping the confidence in mind. The hybrid recommender formed using Content-Based [1] and Collaborative Filtering [2] helps in a refined prediction of the books that should be recommended to the users.
It takes into account the social analogy of the various users, drawing on their user profiles on goodreads.com. The mutual friends, likes, books rated and various connectivities are studied and evaluated. Firstly, the similarity between items (item-based filtering) and users (user-based filtering) is calculated; then the missing values in the sparse matrix are filled using these data. If the rating crosses the threshold, which in this case is the mean of all the ratings given by the user over his lifetime, then the book will be recommended to the user. If he is well connected to a user, and he/she has a good influence on him, it is highly likely that a book liked and read by that user will be recommended as well. All these factors, implemented in the recommendation system, will create a hybrid version which reduces the errors in the calculation of accurate predictions. We take into account the Eigenvector [4] and Closeness [4] centrality, through Gephi, to see how the other people in the friends list, who act as nodes, influence the choice of books for the user. A vivid picture of all the clusters and networks is formed, from which we can obtain a parameter indicating whether the user is likely to jump from his genre of comfort to a completely new genre out of mere influence from a particular user or a group of users. People don't generally like sticking to their old choices and want to try new fields of interest. Here is where our social network analysis tool comes into play, surpassing a simple recommendation system which is based entirely on users and items.

4. Conclusion and Future Work

Here we discuss some factors by which the hybrid content-boosted collaborative filtering helped us in overcoming the shortcomings of pure content-based or pure collaborative filtering. First and foremost is overcoming the Sparsity and First Rating problems. The problem is solved as the content-based part comes into play in the cases where the rating causes problems in the evaluation. Better neighbours are generated by bringing the clustering and betweenness centrality into play. The predictions are better and easier once the neighbouring nodes are properly classified according to their similarity and connectivity. The study of social network analysis and the connections is vital to predicting the items to be recommended to the users. Content predictions are made on a large number of data items, so when a large number of training sets are taken, better predictions take place along with a more accurate threshold. CBCF collectively overcomes the shortcomings of the pure content-based and collaborative filtering methods. Future implementations can be linked with facebook.com to incorporate hybridisation of other applications, to provide a refined dataset as well as better prediction.

References
1. R. Zafarani, M. A. Abbasi, H. Liu, Social Media Mining, Cambridge University Press.
2. M.J. Pazanni, A framework on content based, collaborative and demographic filtering. Artificial Intelligence Review.
3. Prem Melville, Raymond J. Mooney and Ramadass Nagarajan, Content Boosted Collaborative Filtering, Department of Computer Sciences, University of Texas.
4. John Scott, Social Network Analysis: A Handbook, Sage Publications, London, 1987.
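Because Eqs. (1)-(3) are not reproduced in this text, the sketch below illustrates, under stated assumptions, the two ingredients the paper combines: the attribute-overlap style of content similarity from Table 1 and the Pearson-correlation and weighted-prediction form of user-based collaborative filtering described as Eq. 2 and Eq. 3. The function names, the toy data and the use of NumPy are illustrative assumptions only; the authors' own pipeline used Gephi, NodeXL and Weka.

import numpy as np

def content_similarity(book_a, book_b):
    # Fraction of matching attributes, as in the Table 1 example
    # (Kabuliwalah vs Chokher Baali -> 1.0; Rage of Angels vs The Naked Face -> 0.666).
    keys = ("author", "genre", "language")
    return sum(book_a[k] == book_b[k] for k in keys) / len(keys)

def pearson_sim(R, u, v):
    # Pearson correlation between users u and v over co-rated items (the role of Eq. 2).
    both = ~np.isnan(R[u]) & ~np.isnan(R[v])
    if both.sum() < 2:
        return 0.0
    du = R[u, both] - R[u, both].mean()
    dv = R[v, both] - R[v, both].mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    return float(du @ dv / denom) if denom > 0 else 0.0

def predict(R, u, item):
    # User u's mean rating plus a similarity-weighted average of the other users'
    # mean-centred ratings of the item (the role of Eq. 3).
    base = np.nanmean(R[u])
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == u or np.isnan(R[v, item]):
            continue
        s = pearson_sim(R, u, v)
        num += s * (R[v, item] - np.nanmean(R[v]))
        den += abs(s)
    return base + num / den if den > 0 else base

# Toy user x book rating matrix (rows: users, columns: books, NaN = unrated).
R = np.array([[5.0, 4.0, np.nan, 1.0],
              [4.0, np.nan, 5.0, 1.0],
              [1.0, 2.0, 1.0, 5.0]])
print(round(predict(R, 0, 2), 2))   # estimated rating of book 2 for user 0

In a hybrid (content-boosted) setting, the content-based score can be used to fill the sparse rating matrix before the collaborative step, which is how the paper addresses the sparsity and first-rating problems.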

Singh S., Mishra N., Sharma S., Survey of Various Techniques for Determining Influential Users in Social Networks. In Proceedings of the International Conference on Emerging Trends in Computing, Communication and Nanotechnology (ICE-CCN), pages .
Nancy P., Ramani R., Discovery of Patterns and Evaluation of Clustering Algorithms in Social Network Data (Facebook 100 Universities) through Data Mining Techniques and Methods, International Journal of Data Mining & Knowledge Management Process (IJDKP), Vol. 2(5), pp .

National Conference on Recent Trends in Information Technology and Management (RTITM 2017)

A High-Yield Function Mapping Technique for Defective Nano-Crossbar Circuits
Anshuman Bose Majumdar 1, Subhradeep Chakraborty 2, Malay Kule 3
1,2 Jalpaiguri Government Engineering College, Jalpaiguri, India
3 Indian Institute of Engineering Science and Technology, Shibpur, Howrah, India
1 abmoct@gmail.com, 2 subhradeep2014@gmail.com, 3 malay.kule@gmail.com

Abstract Nanoscale crossbar architecture is very profitable in VLSI technology, but this type of architecture often suffers from manufacturing defects. Function mapping using these defective crossbars poses a fundamental challenge. In this work, we propose a suitable algorithm for function mapping on nanoscale crossbar arrays using a defect-tolerant approach. Experimental results show that our suggested method works effectively, with a high success rate.

Keywords: Nanoscale crossbar; VLSI; Manufacturing defects; Defect tolerance.

1. Introduction

In recent years, there has been a huge push towards reducing the size of electronic devices and integrating more functions into them, which has led to the development of various nanotechnology-based fabrication methods as alternatives to conventional lithography-based methods. As the sizes of well-known lithography-based VLSI devices shrink from the submicron scale to further below, nanoscale fabrication is expected to alter the conventional methods. The crossbar architecture has become the most promising computational architecture. A nanoscale crossbar consists of two parallel planes of nanowire arrays separated by a thin layer of a chemical substance with certain electrochemical properties [2,3]. The junctions of wires, called cross-points, can become defective for various reasons. Generally, cross-points have three basic types of manufacturing defects [6]. Firstly, a stuck-open [5,6] fault acts like an open switch; a function can be assigned to a nanowire containing such a defect only by avoiding this junction. Secondly, a stuck-closed fault [6] can be considered a shorted switch; a function can be assigned to a nanowire with a stuck-closed defect only if the function has some on input at that junction of the crossbar. Besides these, bridging faults may exist in the crossbar. To make these defective crossbars profitable, we have to use them by implementing a defect-tolerant approach. Various techniques for mapping different functions onto defective nanoscale crossbar circuits have been elaborated in the recent past [3,4,6]. Paper [1] shows a greedy algorithm to map functions, whereas paper [2] presents a technique based on a graph matching algorithm. In this paper, we propose an efficient function mapping technique where functions are generated randomly, with varying defect percentages and crossbar dimensions.
The rest of the paper is organized as follows: Section 2 describes the proposed function mapping method, and Section 3 shows the experimental results, followed by the conclusion and references in Section 4.

2. Proposed Function Mapping Method

A crossbar matrix [6] (CB) of dimension cd x cd is generated randomly, where every value represents the status of a junction of the crossbar: 0, 1 and 2 represent an open-switch defect, a configurable junction and a closed-switch defect respectively. For a set of functions, the distinct product terms are calculated by using the variables as input to a crossbar configured to work as AND logic. The distinct product terms are then used as input to an OR-configured crossbar to realize the respective functions. Assume that the set of functions F, expressed as sums of products, is as follows: f1 = ab + cd, f2 = ab + bc, f3 = bc + abc, f4 = ac + bc, f5 = abc + cd
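As a concrete illustration of the decomposition step just described, the short sketch below builds the set of distinct product terms for the example functions f1-f5 and encodes them as the 0/1 Function Matrix (FM) that the AND-plane mapping uses; the data structures and names are illustrative assumptions, not the authors' code.

# Each function is a list of product terms; each product term is a set of variables.
functions = {
    "f1": [frozenset("ab"), frozenset("cd")],
    "f2": [frozenset("ab"), frozenset("bc")],
    "f3": [frozenset("bc"), frozenset("abc")],
    "f4": [frozenset("ac"), frozenset("bc")],
    "f5": [frozenset("abc"), frozenset("cd")],
}

# Distinct product terms across all functions (p1..p5 in the running example).
products = []
for terms in functions.values():
    for term in terms:
        if term not in products:
            products.append(term)

variables = sorted(set().union(*products))            # ['a', 'b', 'c', 'd']

# Function Matrix FM for the AND plane: FM[j][k] = 1 if product term j uses variable k.
FM = [[1 if v in p else 0 for v in variables] for p in products]

for p, row in zip(products, FM):
    print("".join(sorted(p)), row)    # e.g. ab [1, 1, 0, 0], cd [0, 0, 1, 1], ...

The same FM shape (m product terms by n variables) is what the Bipartite_graph routine described below compares against the cd x cd crossbar matrix.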

2.1 Mapping the Distinct Product Terms on the AND-Plane Crossbar

The distinct product terms of the set of functions F are p1 = a.b, p2 = c.d, p3 = b.c, p4 = a.b.c, p5 = a.c. The Function Matrix (FM) generated to represent the distinct product terms, along with a randomly generated Crossbar Matrix of size 5 x 5 with 40% defects, is shown in Fig 1. [Fig 1(a): Function Matrix; Fig 1(b): Crossbar Matrix]

A Bipartite Graph (BG) matrix generated by the Bipartite_graph function represents every possible mapping of the product terms to the crossbar wires. The matrix has product terms along the columns and wires along the rows. If a product term can be implemented in a wire, the number of variables used in that product term is assigned to the corresponding position of the matrix. Fig 2 shows the generated BG matrix and the visual representation of the Bipartite Graph. The One-To-One Matrix is generated from the BG matrix using the proposed algorithm to map the distinct product terms to crossbar wires. The matrix and its graphical representation are shown in Fig 3.

2.2 Mapping the Functions using the OR-Plane Crossbar

The functions, expressed as sums of the distinct product terms, can be re-written as f1 = p1 + p2, f2 = p1 + p3, f3 = p3 + p4, f4 = p5 + p3, f5 = p4 + p2. The Function Matrix and a randomly generated Crossbar Matrix of size 5 x 5 with 40% defects are shown in Fig 4. [Fig 4(a): Function Matrix; Fig 4(b): Crossbar Matrix]

A Bipartite Graph (BG) matrix generated by the Bipartite_graph function represents every possible mapping of the functions to the crossbar wires. If a function can be implemented in a wire, the number of product terms used in that function is assigned to the corresponding position of the matrix. Fig 5 shows the generated BG matrix and the corresponding Bipartite Graph. The One-To-One Matrix is generated from the BG matrix using the proposed algorithm to map the functions to crossbar wires. The matrix and its graphical representation are shown in Fig 6. [Fig 5(a): BG Matrix; Fig 5(b): Bipartite Graph; Fig 6(a): One-To-One Matrix; Fig 6(b): One-To-One Graph]

Algorithm

Bipartite_graph (CB, FM):
Input: matrix CB of size (cd x cd) and FM of size (m x n)
Output: one-to-many matrix BG of size (cd x m)
1. Repeat 2 to 3 for i from 1 to cd
2. Repeat 3 for j from 1 to m
3. Repeat for k from 1 to n
   3.a. If (CB[i][k] = 0 and FM[j][k] = 1) or (CB[i][k] = 2 and FM[j][k] = 0), then set BG[i][j] = 0 and go to step 2
   3.b. Else if (CB[i][k] = 1 and FM[j][k] = 1) or (CB[i][k] = 2 and FM[j][k] = 1), then increment BG[i][j] by one
4. Return BG

One_to_one (BG):
Input: Bipartite graph matrix BG of size (cd x m)
Output: One-to-one matrix OTO of size (cd x m) that has functions in columns and wires in rows
1. Repeat 2 to 5 until no more functions can be implemented
2. Select the non-allocated functions with minimum degree that can be implemented in BG
3. Select the unused wires with minimum degree that have at least one function implemented in BG
4. Select the functions from step 2 having the maximum number of variables
5. Allocate 1 in the OTO matrix for as many functions as possible using a first-come-first-serve rule, provided not more than one allocation is established in a given row or column.

Function_mapping:
1. Input the set of functions.
2. Break the functions into distinct product terms.
3. Treat each product term as a function of some variables and implement them in a crossbar using the Bipartite_graph and One_to_one functions.

3. Experimental Results

For the experiment, we considered 100 randomly generated functions. The functions were mapped onto randomly generated crossbar matrices of dimension varying from 100x100 to 1000x1000, with defect percentage varying from 5% to 30%. The average mapping success rate is plotted in the graph shown in Fig 7. A 100% mapping rate was never achieved, since a few functions could not be implemented due to the hardware limitation caused by the positions of the randomly generated defects.

Table 1: Mapping success rate for variable crossbar dimension and variable defect percentage

Fig 7: Success rate of function mapping for variable crossbar dimension with variable defect percentage

4. Conclusions

Here, we have successfully realized randomly generated functions, given in sum-of-products or product-of-sums form, in two steps. First, the distinct product terms are mapped onto an AND-plane crossbar, and then the functions are realized on a separate OR-plane crossbar that takes the outputs of the AND-plane crossbar as its inputs. The overall process takes O(n³) time to complete.

References
1. H. Naeimi, A. DeHon, A Greedy Algorithm for Tolerating Defective Cross points in NanoPLA Design, IEEE International Conference on Field-Programmable Technology (FPT 2004).
2. Tad Hogg, Greg Snider, Defect-tolerant Logic with Nanoscale Crossbar Circuits, HP Labs, Palo Alto, CA, May 25.
3. M. Kule, H. Rahaman, Defect Tolerant Approach for Function Mapping in Nano Crossbar Using Evolutionary Algorithms, 3rd Int. Conf. on Microelectronics, Circuits and Systems, Micro 2016.
4. J. S. Yang, R. Datta, Efficient Function Mapping in Nanoscale Crossbar Architecture, IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems, October.
5. M. Kule, H. Rahaman, B. B. Bhattacharya, On Finding a Defect Free Component in Nanoscale Crossbar Circuits, 4th ICECCS 2015, Procedia Computer Science 70.
6. Y. Su, W. Rao, Defect-Tolerant Logic Mapping on Nanoscale Crossbar Architectures and Yield Analysis, 24th IEEE Int. Symp. on Defect and Fault-Tolerance in VLSI Systems.

Selection of Optimum Power Plant Using Multi Criteria Decision Making (MCDM) Tool

Ritu 1, Parth Raval 2, Shabbiruddin 3, Department of Electrical & Electronics Engg., Sikkim Manipal Institute of Technology, Sikkim, India
1 ritz17995@gmail.com, 2 parthraval31@yahoo.com, 3 shabbiruddin85@yahoo.com

Abstract

This paper presents a model using an MCDM technique to analyze the methods used to generate electricity. The different types of power generating plants are compared on a number of technical and non-technical criteria. The selection of the type of power plant depends upon many factors, such as the costs involved in construction and maintenance of the plant, availability of land, etc. The analysis is done using an MCDM approach, the Analytical Hierarchy Process (AHP) method, taking twelve basic attributes (initial cost, running cost, location, space required, overall efficiency, stand-by losses, cost of fuel, clean emissions, limit of source of power, maintenance cost, transmission & distribution cost and starting time) to select the best power generating plant out of the four major existing power generation sources, namely the Thermal Power plant, Diesel Power plant, Hydroelectric Power plant and Nuclear Power plant. The intention in this study is to optimize the choice of generating plant based on different criteria. The verdict will help electrical engineers, especially those in the power sector, to consider multiple aspects of decision making and to develop the locations, designs and reconstruction of generating plants.

Keywords : Power generating plants, Factors influencing generating plants, Analytical Hierarchy Process (AHP)

1. Introduction

Energy is a basic necessity for the economic development of a country. The standard of living in a country is affected by electrical power and its use. With the advancement of industry the demand for electrical power is increasing, and various power plants have to be developed to meet this demand. In this paper, a comparison of power generating plants is done using one of the MCDM approaches, the Analytical Hierarchy Process (AHP). The comparisons are proposed to identify areas where the potential for performance improvement is higher, and trends which might aid in the design of future generating plants. The available literature consists of the work of only a few researchers; some comparisons of power generating plants are available in the literature. The coal-fired Nanticoke Generating Plant (NGS) and the Pickering Nuclear Generating Plant (PNGS) are selected as the representative plants on which the comparisons are done [1,2,3]. Both of these plants are located in Ontario, Canada and are operated by the provincial electrical utility, Ontario Power Generation (formerly Ontario Hydro). These plants are selected because the individual units in each plant have similar net outputs (approximately 500 MW). Also, a substantial base of operating data has been obtained for them over several years (NGS has been operating since 1981, and PNGS since 1971). They are representative of present technology and they operate in similar physical environments. Energy and exergy analyses [4-8] are used to perform thermodynamic performance comparisons. The researchers [4-8] propose that the thermodynamic performance of a process is best evaluated with exergy analysis.
Exergy is the work which can be produced by a stream or system as it is brought into equilibrium with a reference environment, and it can be considered a measure of the quality (or usefulness) of energy, with work having the highest quality. Exergy is consumed during real processes and conserved during ideal processes, and the exergy consumption during a process is proportional to the entropy created by process irreversibility. Applications of exergy analysis have increased in recent years and have included investigations of coal-fired electricity generation using conventional methods [9-13], fluidized-bed combustion [14,15] and combined-cycle [16-18] systems, as well as cogeneration [19-21].

2. Problem Definition

Generally, limited alternatives are available in generating system planning. However, some alternatives concerning the number and size of units, auxiliaries, and steam pressure and temperature are usually available, especially for captive plants, and these are to be compared for maximum benefit. Four major types of electric generating power plants are known: the Thermal Power plant (TP), Hydroelectric Power plant (HP), Diesel Power plant (DP) and Nuclear Power plant (NP). There are other types of power plants used around the world besides the big four covered here; some are not covered because they are strictly dependent on the geography of the area where they can be erected: geothermal, wind, solar and tidal. Therefore, the four widely used power plants (steam, hydroelectric, diesel and nuclear) have to be analyzed with respect to the following aspects: location (L) & space required (SR), initial cost (IC) & running cost (RC), overall efficiency (OE) & stand-by losses (SL), maintenance costs (MC), limit of source of power (LSP), cost of fuel (COF) & clean emissions (CE), starting time (ST), and transmission & distribution costs (TDC).

3. Proposed Methodology: Analytical Hierarchy Process (AHP)

This methodology has been used in many areas; it is applied to multi-faceted problems because of its rigorous and systematic approach. The methodology of the analytic hierarchy process is as follows:
1. The decision problem and its objective are determined.
2. The decision criteria are defined in the form of a hierarchy of objectives. To enable comparison, a matrix of size (n x n) is formed as shown in the figure.
3. Pairwise comparisons are made, taking into account the opinions of respective experts in the field.
4. The geometric mean (G.M.) of each row is calculated and the normalized priority vector (P.V.) is obtained.
5. The principal eigenvalue is found by the formula λmax = Σj Cj × P.V.j, where Cj is the sum of the j-th column vector.
6. The consistency index is found by the formula CI = (λmax - n)/(n - 1), where n is the size of the matrix. The judgment consistency is then tested: the consistency ratio (CR) is the ratio of CI to the average random consistency index. If CR < 0.10 the matrix is consistent; otherwise it is revised.
The same calculation can also be done with a software package for this method.
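As an illustration of steps 4 to 6 above, the following is a minimal Python sketch of the AHP weight calculation using the row geometric-mean approximation. The pairwise comparison values and the random-index value RI = 0.90 (Saaty's table for n = 4) are illustrative assumptions, not the judgments used in this study.

import numpy as np

# Illustrative 4 x 4 pairwise comparison matrix (not this paper's actual judgments).
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])
n = A.shape[0]

# Step 4: geometric mean of each row, normalized to give the priority vector.
gm = A.prod(axis=1) ** (1.0 / n)
pv = gm / gm.sum()

# Step 5: principal eigenvalue approximated from the column sums and the priorities.
lambda_max = float(np.sum(A.sum(axis=0) * pv))

# Step 6: consistency index and consistency ratio.
CI = (lambda_max - n) / (n - 1)
RI = 0.90                      # assumed random index for n = 4
CR = CI / RI

print("priority vector:", np.round(pv, 3))
print("lambda_max = %.3f, CI = %.3f, CR = %.3f" % (lambda_max, CI, CR))
print("consistent" if CR < 0.10 else "revise the judgments")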

Solution Methodology

Fig 1: Hierarchy developed for the problem

For the comparison, sortable lists are used with the following key of merit: 1 = Extremely preferred, 2 = Most preferred, 3 = Medium preferred, 4 = Less preferred.

Overall priority recommendation: Hydroelectric power plant.

4. Conclusion

A project engineer should have the following information before initiating any project: an estimation of the probable load, the location of the loads and future load conditions, especially in the case of hydroelectric generating plants, because the transmission cost also needs to be taken into consideration. Multi-criteria decision making (MCDM) represents an interesting area of research, since most real-life problems have a set of conflicting objectives. In this paper we have used the Analytical Hierarchy Process (AHP) technique to rank the various means of power generation, taking criteria such as initial cost, location, etc. as the factors. Considering the important criteria, we arranged the possible options to obtain a better aggregate result; then, depending on the different factors influencing power generation, the most suitable generation unit is selected. With the overall priority result, the hydroelectric power plant appears to be the most efficient power-generating unit. The methodology developed and the results achieved can be studied together for the final selection and location of a power plant.

References
1. Ontario Hydro, Nanticoke generating plant technical data, document, August.
2. Ontario Hydro, Pickering generation plant data: units 1-4, Data Sheet No. 966 A119:0.
3. Scarrow R.B., Wright W., Commissioning and early operation of Nanticoke generating plant, paper presented to the Thermal and Nuclear Power Section at the Canadian Electrical Association Fall Meeting, Halifax, N.S.
4. Moran M.J., Sciubba E., Exergy analysis: principles and practice, J. Engrg. Gas Turbines Power 116 (1994).
5. Gaggioli R.A., Petit P.J., Use the second law first, Chemtech 7 (1977).
6. Rodriguez L.S.J., Calculation of available-energy quantities, in: Gaggioli R.A. (Ed.), Thermodynamics: Second Law Analysis, ACS Symposium Series, Vol. 122, Amer. Chem. Soc., Washington, DC, 1980.
7. Gaggioli R.A., Available energy and exergy, Internat. J. Appl. Therm. 1 (1998).
8. Moran M.J., Shapiro H.N., Fundamentals of Engineering Thermodynamics, 4th edn., Wiley, New York.
9. McIlvried H.G., Ramezan M., Enick R.M., Venkatasubramanian S., Energy and pinch analysis of an advanced ammonia-water coal-fired power cycle, in: Proc. ASME Advanced Energy Systems Division, AES, Vol. 38, 1998.
10. Yasni E., Carrington C.G., The role for energy auditing in a thermal power plant, in: HTD, Vol. 80, Amer. Soc. Mech. Engineers, New York, 1987.
11. Habib M.A., Zubair S.M., Second-law based thermodynamic analysis of regenerative-reheat Rankine-cycle power plants, Energy Internat. J. 17 (1992).
12. Habib M.A., Said S.A.M., Al-Bagawi J.J., Thermodynamic performance analysis of the Ghazlan power plant, Energy Internat. J. 20 (1995).
13. Habib M.A., Said S.A.M., Al-Zaharna I., Optimization of reheat pressures in thermal power plants, Energy Internat. J. 20 (1995).

14. Rosen M.A., Horazak D.A., Energy and exergy analyses of PFBC power plants, in: Alvarez Cuenca M., Anthony E.J. (Eds.), Pressurized Fluidized Bed Combustion, Chapman and Hall, London, 1995.
15. Grimaldi C.N., Bidini G., Using energy analyses on circulating fluidized bed combustors, in: A Future For Energy: Proc. Florence World Energy Research Symp., Firenze, Italy, 1990.
16. Tawfik T., Tsatsaronis G., Price D., Exergetic comparison of various IGCC power plant designs, in: Proc. Internat. Conf. Energy Systems and Ecology, Cracow, Poland, 1993.
17. Kim D.J., Jeon J.S., Kwak H.Y., Exergetic and thermoeconomic analyses of a combined cycle power plant, in: Proc. ASME Advanced Energy Systems Division, AES, Vol. 39, 1999.
18. Jin H., Ishida M., Kobayashi M., Nunokawa M., Exergy evaluation of two current advanced power plants: supercritical steam turbine and combined cycle, J. Energy Resources Technology 119 (1997).
19. Habib M.A., First- and second-law analysis of steam-turbine cogeneration systems, J. Engrg. Gas Turbines Power 116 (1994).
20. Rosen M.A., Comparison based on energy and exergy analyses of the potential cogeneration efficiencies for fuel cells and other electricity generation devices, Internat. J. Hydrogen Energy 15 (1990).
21. Rosen M.A., Energy utilization efficiency in a macrosystem (Ontario): evaluation and improvement through cogeneration, in: Proc. Internat. Symp. CO2 Fixation and Efficient Utilization of Energy, Tokyo, 1993.

MCX Crude Oil Price Trend Forecasting Using Naive Bayes Classifier

Amit Gupta #1, Subrata Kumar Mandal #2, Animesh Hazra #3, Md. Rasid Ali *4, Pritesh Ranjan *5, Suparna Podder *6
#1,2,3 Faculty, Jalpaiguri Govt. Engg. College, Jalpaiguri, West Bengal, India
1 amitgupta4in@gmail.com, 2 mandal.skm@gmail.com, 3 hazraanimesh53@gmail.com
*4,5,6 Student, Jalpaiguri Govt. Engg. College, Jalpaiguri, West Bengal, India
4 mdrasid12096@gmail.com, 5 pranjan341@gmail.com, 6 poddersuparna1432@gmail.com

Abstract

Crude oil trend forecasting is a challenging task due to the complex, nonlinear and chaotic behaviour of oil prices. During the last couple of decades, both researchers and professional traders have devoted considerable effort to this issue. This paper proposes a new method for weekly crude oil trend forecasting based on the Naive Bayes Classifier (NBC) and extends this particular branch of recent work by considering a number of leading features as inputs to test the forecasting performance on the MCX crude oil price trend, covering the period 2nd January 2012 through 30th December 2016. Experimental results show that the proposed method is efficient and warrants further research in this domain.

Keywords : MCX Crude Oil Price Trend, Prediction Accuracy, Classification.

1. Introduction

Crude oil is a mineral oil consisting of a mixture of naturally occurring hydrocarbons and associated impurities such as sulphur. Under normal surface temperatures and pressure it exists in liquid form. Its physical characteristics (for example, density) are highly variable. In 2015, the International Energy Agency (IEA) Oil Market Report for 2016 forecast a global average demand of nearly 96 million barrels of oil and liquid fuels per day, which works out to more than 35 billion barrels a year.
Production exceeded 97 million barrels per day (mb/d) in late 2015, and the Medium-Term Oil Market Report 2016 predicted demand crossing the 100 mb/d threshold towards the end of its five-year outlook period [1]. Most countries depend on imported crude oil to meet their energy needs, and oil-exporting countries attempt to use oil as a tool to perpetuate and manage political and economic power. Changes in world crude oil prices are becoming an intensifying source of concern for the economic and organizational decisions made by governments. Since every economic region in the world is dependent on crude oil, any rise or fall in its price has an immediate effect on the global economy [2]. Over the past decades, most enterprises have followed a demand-driven model. Such a model is complex: it requires all parties to have good insight into real-time consumption and emerging buying patterns, and it also gives companies more opportunities to share information and collaborate with others in the supply chain. The main relationship between supply and demand is that with increasing demand, or with a supply or production cut, the price trend rises; if the demand for a commodity decreases or its supply increases, i.e., the product is easily available in the market, then the price automatically follows a downward trend. The world oil demand and supply chart from 2013 to 2016 is shown below in Figure 1.

Fig 1: World Oil Demand and World Oil Supply Chart

2. Background Work

We talked about the importance of crude oil and its effects on the economy. It has attracted the attention of researchers and professional traders, who study it from different perspectives and in different categories. In the academic literature, the prediction of oil prices has been addressed in different ways, as discussed below. For example, Xie et al. [3] categorized the literature as:
a) Crude oil prices within the demand and supply framework,
b) Crude oil price volatility analysis,
c) Crude oil price forecasting.
Pan et al. [4] analyzed it in three parts:
a) Futures as a predictor of the oil price,
b) Economic models for explaining the prediction,
c) Intelligent computing models to predict the oil price.
Abramson and Finizza [5-7] used Belief Networks (BNs) to forecast the crude oil price. More recently, they used a probabilistic belief network model to address the same question [8]. Wang et al. [9] proposed a new integrated methodology, the TEI@I methodology, and showed good performance in forecasting crude oil prices with the help of a back-propagation neural network (BPNN) as the integrating technique. BPNNs, one of the most popular classes of neural network models, can in principle model nonlinear relations, but they do not lead to one global or unique solution due to differences in their initial weight sets.

3. Data Set

In this part of the study, we selected seven crude oil data sets from the EIA [10] and one MCX crude oil data set from MCX India [11]. Table 1 summarizes the properties of the selected data sets. In the first seven data sets, there are five attributes, namely release date, time, weekly actual crude oil production in million barrels, weekly previous crude oil production in million barrels and number of instances. In the eighth data set, there are seven attributes, namely release date, weekly volume, weekly open price, weekly high price, weekly low price, weekly close price and number of instances. All the attributes are collected in order of the release date.

Table 1: Summary description of crude oil data sets

4. Concepts Used in MCX Crude Oil Price Trend Prediction

4.1 Pearson Correlation Coefficient (PCC)

This is a feature selection procedure employed to measure the strength of a linear relationship between two variables, where a correlation coefficient of r = 1 implies a perfect positive correlation and r = -1 implies a perfect negative correlation. Correlation between sets of data is a measure of how strongly they are associated with each other. The most common quantification of correlation in statistics is the Pearson Correlation Coefficient, whose value lies between -1 and +1.

4.2 Naïve Bayes Classifier (NBC)

The Naïve Bayes classifier is based on the mathematical principle of conditional probability. If n attributes are given, the number of independence assumptions made by the NBC is 2n!. The conditional probability model for the NBC is given as P(Ci | x), where Ci is the i-th class and x is the input vector. Here the class variable C is conditioned on several attributes x = (x1, ..., xn).
The classifier works on the simple Naïve Bayes formula shown in Equation 1.

5. Proposed Methodology

In this research paper, the Pearson Correlation Coefficient is used for feature selection and the Naïve Bayes classifier is used to forecast the trend of the crude oil price on a weekly basis. Here 80 percent of the data are used for training and 20 percent for testing.
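As a rough illustration of this pipeline, the following sketch selects features by their Pearson correlation with the trend label and fits a Gaussian Naïve Bayes classifier on a chronological 80/20 split. The file name, column names, correlation cutoff and use of scikit-learn are assumptions made for illustration; they are not details taken from this paper.

import numpy as np
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical weekly data: engineered features plus a 0/1 trend label
# (1 = up-trend, 0 = down-trend), ordered by release date.
df = pd.read_csv("mcx_weekly_features.csv")          # assumed file and layout
feature_cols = [c for c in df.columns if c != "trend"]

# Feature selection: keep features whose |Pearson r| with the label
# exceeds an illustrative threshold of 0.3.
selected = [c for c in feature_cols
            if abs(np.corrcoef(df[c], df["trend"])[0, 1]) > 0.3]

# Chronological 80/20 split: first 80% of weeks for training, rest for testing.
split = int(0.8 * len(df))
X_train, y_train = df[selected].iloc[:split], df["trend"].iloc[:split]
X_test, y_test = df[selected].iloc[split:], df["trend"].iloc[split:]

model = GaussianNB().fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("confusion matrix:\n", confusion_matrix(y_test, pred))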

From the first seven data sets mentioned in Table 1, the difference of Actual and Previous is calculated to obtain seven independent attributes. Thereafter, PCC is applied to these independent attributes, and eventually the independent attributes are brought down to five. After careful observation of the monetary cost involved in commodity buying and selling due to brokerage, service tax, stamp duty, commodity transaction tax, etc., we have chosen a threshold value of 0.05 percent. From the eighth data set in Table 1, the difference of the Close and Open is calculated. The dependent attribute is labelled as up-trend (1) if the difference percentage is larger than 0.05 percent; otherwise it is labelled as down-trend (0).

Fig 2: Workflow diagram for MCX crude oil price trend prediction.

6. Result and Discussion

In this study, the Naïve Bayes classifier is used to predict the price trend of crude oil. It achieves a classification accuracy of 84.6%. The level of effectiveness of the classification model is calculated by using a confusion matrix: a table that is often used to describe the performance of a classifier or classification model on a collection of test data for which the true values are known, based on the number of correct and incorrect classifications for each possible value of the variable being classified. In this paper, a testing dataset is taken in which there are 31 crude oil up-trend data points, of which 26 have been predicted correctly and only 5 are predicted wrongly. Also, there are 21 crude oil down-trend data points, of which 3 are predicted wrongly and 18 are predicted correctly using the Naïve Bayes classifier. This analysis is shown in Table 2.

Table 2: Confusion matrix of the dataset using the Naïve Bayes classifier

References
2. Khashman, Adnan, and Nnamdi I. Nwulu, Intelligent prediction of crude oil price using Support Vector Machines, Applied Machine Intelligence and Informatics (SAMI), 2011 IEEE 9th International Symposium on, IEEE.
3. Xie, W., Yu, L., Xu, S., & Wang, S., A new method for crude oil price forecasting based on support vector machines, in Computational Science ICCS, Springer Berlin Heidelberg.
4. Pan H., Haidar I., & Kulkarni S., Daily prediction of short-term trends of crude oil prices using neural networks exploiting multimarket dynamics, Frontiers of Computer Science in China 3(2).
5. Abramson, Bruce, and Anthony Finizza, Using belief networks to forecast oil prices, International Journal of Forecasting 7.3 (1991).
6. Abramson B., and A. J. Finizza, A belief network implementation of target capacity utilization, Proceedings of the 13th North American Conference of the International Association for Energy Economics.
7. Abramson B., and A. J. Finizza, A Belief Network-Based System that Forecasts the Oil Market by Constructing Producer Behavior, Proceedings of the 15th North American Conference of the International Association for Energy Economics.
8. Abramson, Bruce, and Anthony Finizza, Probabilistic forecasts from probabilistic models: a case study in the oil market, International Journal of Forecasting 11.1 (1995).
9. Wang, S.Y., L.A. Yu, K.K. Lai, Crude oil price forecasting with TEI@I methodology,
Journal of Systems Science and Complexity 18 (2005).

The subject area of any commodity can be analyzed fundamentally, technically, or both. We have studied the technical analysis of crude oil. In this paper, we used the PCC algorithm for feature (i.e., attribute) selection and thereafter tested the performance of the Naïve Bayes classifier for the prediction of the crude oil price trend on a weekly basis.

7. Conclusion

In essence, oil prices are among the key economic variables. High inflation may discourage the supply of goods in general, as the value of money falls precipitately. If the oil supply turns regressive, economic growth will be obstructed and pressure on the oil price will accelerate. Restoring stable oil markets is essential for lasting economic growth and price stability. In this paper, the experimental results show that the proposed method is efficient and warrants further research in this domain, considering technical as well as fundamental factors.

Improving the Prediction Accuracy of Diabetes Diagnosis using Naïve Bayes, Binary Logistic Regression and k-Nearest Neighbours Classifiers

Animesh Hazra #1, Subrata Kumar Mandal #2, Amit Gupta #3, Arindam Ghosh *4, Abhisek Hazra *5, Abhisek Sutradhar *6
#1,2,3 Faculty, Jalpaiguri Govt. Engg. College, Jalpaiguri, West Bengal, India
1 hazraanimesh53@gmail.com, 2 mandal.skm@gmail.com, 3 amitgupta4in@gmail.com
*4,5,6 Student, Jalpaiguri Govt. Engg. College, Jalpaiguri, West Bengal, India
4 arindamghosh094@gmail.com, 5 abhisekbubuhazra@gmail.com, 6 abhiseksutradhar24@gmail.com

Abstract

Data mining methods help to diagnose patients' diseases. Diabetes mellitus is a chronic disease that affects various organs of the human body. Early prediction can save human lives and help control the disease. This paper explores the early prediction of diabetes using various data mining techniques. The dataset has 768 instances taken from the PIMA Indian dataset, used to determine the accuracy of the data mining techniques in prediction. Then a comparative study of different diabetes mellitus classification approaches, viz. the Naïve Bayes, Binary Logistic Regression and k-Nearest Neighbours classifiers, is conducted. Here, the Binary Logistic Regression classifier is found to be the best of the three classifiers.

Keywords : Data Mining, Diabetes, Prediction Accuracy, Classification

1. Introduction

Today, health care is a buzzword all over the world. Early prediction of diseases can reduce the human fatality rate. Presently, huge amounts of data are available in hospitals and medical institutions, and information technology plays an important role in health care. Diabetes is a lifelong disease with the potential to cause a worldwide health care crisis. According to the International Diabetes Federation, 382 million people are suffering from diabetes worldwide; by 2035, this is expected to rise to 592 million. Early prediction of diabetes is an exceptionally challenging task for medical practitioners due to its complex interdependence on numerous factors. Diabetes affects the functioning of human organs such as the kidneys, eyes, heart, nerves and feet. Data mining is a method to extract useful information from large databases. It is a multidisciplinary field of computer science which consists of computational processes, machine learning, statistical techniques, classification, clustering and the discovery of patterns.

1.1 Causes of Diabetes

Hereditary and genetic factors, insulin deficiency, insulin resistance, stress, obesity, increased cholesterol level, high carbohydrate diet, nutritional deficiency, excess intake of oil and sugar, infections caused by viruses, lack of physical exercise, overeating, tension, worries and high blood pressure.

1.2 Types of Diabetes

Type 1: Here the pancreas is unable to produce the required amount of insulin and hence the glucose level in the blood is above the normal range. People suffering from this type are usually dependent on external insulin.

Type 2: Here the cells of the body fail to use the insulin produced because of resistance to insulin.

Gestational diabetes occurs when pregnant women without a prior diabetes background develop a high level of blood glucose. In the majority of cases, such patients can control their diabetes with the help of diet and exercise.
About 10% to 20% of them require some kind of blood-glucose-controlling medication. In a few cases, this type of diabetes leads to type 2 diabetes in the future. It affects about 4% of all pregnant women. Congenital diabetes occurs because of genetic defects of insulin secretion; cystic fibrosis-related diabetes and steroid diabetes caused by high doses of glucocorticoids are other forms.

1.3 Basic Concepts used in Diabetes Mellitus Detection

1.3.1 Principal Component Analysis (PCA)

This is a feature extraction technique which uses an orthogonal transformation to convert a set of instances of possibly correlated parameters into a set of values of linearly uncorrelated parameters, which are called principal components.

1.3.2 Naïve Bayes Classifier (NBC)

A naïve Bayes classifier simply assumes that the value of a particular feature is independent of the presence or absence of any other feature, given the class variable. For example, we may consider a fruit to be an apple if it is red, round and about 3 inches in diameter. A naïve Bayes classifier assumes each of these features to contribute independently to the probability that this fruit is an apple, irrespective of the presence or absence of the other features.

1.3.3 Binary Logistic Regression Classifier (BLRC)

Predictive analysis with the Binary Logistic Regression classifier helps in health care; it is done predominantly to determine which patients are at risk of developing certain conditions, for example diabetes, asthma, heart disease and other chronic illnesses. Additionally, sophisticated clinical decision support systems integrate predictive analytics to support medical decision making at the point of care. Logistic regression is an extension of linear regression, used primarily for predicting binary or multi-class dependent variables.

1.3.4 k-Nearest Neighbours Classifier (k-NNC)

In pattern recognition, the k-Nearest Neighbours algorithm (or k-NN for short) is a non-parametric method for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression. In k-NN classification, the output is a class membership: an object is classified by a majority vote of its neighbours and is assigned to the class most common among its k nearest neighbours (k is typically a small positive integer). When k = 1, the object is simply assigned to the class of its single nearest neighbour. In k-NN regression, the output is the property value for the object, computed as the average of the values of its k nearest neighbours.

1.4 Data Source

To evaluate these data mining classification techniques, the PIMA Indian dataset is used. The dataset has 768 instances and 9 attributes. In particular, all patients are females at least 21 years old of PIMA Indian heritage [6].

2. Background Work

A research paper was presented by Yang Guo, Guohua Bai and Yan Hu, School of Computing, Blekinge Institute of Technology, Karlskrona, Sweden [1]. The discovery of knowledge from medical databases is important for making an effective medical diagnosis. The dataset used was the PIMA Indian diabetes dataset. Preprocessing was used to improve the quality of the data, and a classifier was applied to the modified dataset to construct the Naïve Bayes model. Finally, WEKA was used for the simulation, and the accuracy of the resulting model was 72.3%.
In the paper [2] by Murat Koklu and Yavuz Unal, diabetes mellitus prediction was done using multilayer perceptron, J48 and Naïve Bayes models. The testing diagnostic accuracy of the Naïve Bayes classifier was about 76.3% using the WEKA tool, in comparison with the performance of other well-known machine learning techniques. In the paper [3] by K.R. Lakshmi and S. Prem Kumar, data mining techniques were utilized for the prediction of diabetes disease survivability using the C4.5, SVM, k-NN, PNN, BLR, MLR, PLS-DA, PLS-LDA, k-means and Apriori algorithms.

The highest classification accuracy, of about 76.78%, was achieved by the PLS-LDA classifier using the Tanagra tool, compared with the performance of the other classification methods. In the paper [4] by Ashwin Kumar U.M. and Dr. Ananda Kumar K.R., diabetes mellitus prediction was done using a decision tree and an incremental learning model. The highest classification accuracy, of about 68.00%, was achieved by the C4.5 classifier using the WEKA tool, compared with the performance of the other classification methods. In the paper [5] by Manaswini Pradhan and Dr. Ranjit Kumar Sahu, diabetes disease prediction was done using an artificial neural network and a genetic algorithm. The best classification accuracy of the artificial neural network classifier was about % using the Tanagra tool, compared with the performance of the genetic algorithm classifier.

3. Proposed Methodology

Today's real-world databases are highly vulnerable to noisy, missing and inconsistent data due to their typically massive size and their likely origin from multiple, miscellaneous sources. Hence data preprocessing is a necessary phase for classification purposes. Data preprocessing includes data cleaning, data dimensionality reduction and data transformation (data normalization), followed by classification. Here, we have taken the PIMA dataset from the UCI machine learning repository as input data. This dataset contains 768 instances and 9 attributes, of which we have taken 576 instances for training and 192 instances for testing. The testing data are applied to three classification methods, viz. Naïve Bayes, Binary Logistic Regression and k-Nearest Neighbours, to detect whether the patient has any diabetic history or not, i.e. whether the tested result is positive or negative. Our data cleaning technique includes replacing the missing values, if present, with the mean of the attributes. In the following workflow diagram, we represent the prediction of diabetes mellitus using PCA as a feature selection technique.

Fig 1: Workflow diagram for diabetes mellitus detection using Principal Component Analysis.

4. Result and Discussion

In this paper we have conducted a comprehensive study of different classification techniques and provided a basis for comparison among them in terms of accuracy percentage. This is shown in Table I, which reveals that the Binary Logistic Regression classifier is the best classifier, with a classification accuracy of 80.50%, compared to the other two classifiers. The level of effectiveness of the classification model is calculated by using a confusion matrix: a table that is often used to describe the performance of a classifier or classification model on a collection of test data for which the true values are known, based on the number of correct and incorrect classifications for each possible value of the variable being classified. In this paper, a testing dataset is taken in which there are 135 diabetes-positive data points, of which 109 have been predicted correctly and only 26 are predicted wrongly. Also, there are 57 diabetes-negative data points, of which 11 are predicted wrongly and 46 are predicted correctly using the Binary Logistic Regression classifier. This analysis is shown in Table II.
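A minimal sketch of the pipeline described above is given here, assuming the PIMA data in a CSV file with an Outcome column and using scikit-learn. The file name, the five-component PCA and the classifier settings are illustrative assumptions rather than the exact implementation used in this paper.

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# PIMA Indians Diabetes data: 8 predictors plus the binary "Outcome" column.
# The file name and column layout are assumptions for illustration.
df = pd.read_csv("pima_indians_diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]

# Data cleaning: replace missing values with the attribute mean.
X = X.fillna(X.mean())

# Dimensionality reduction with PCA; five components echo the paper's
# "five dominant features", but the exact choice is an assumption here.
X_reduced = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))

# 576 instances for training and 192 for testing (of the 768 rows).
X_train, X_test = X_reduced[:576], X_reduced[576:]
y_train, y_test = y.iloc[:576], y.iloc[576:]

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Binary Logistic Regression", LogisticRegression(max_iter=1000)),
                  ("k-Nearest Neighbours", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name, "accuracy:", round(accuracy_score(y_test, pred), 3))
    print(confusion_matrix(y_test, pred))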
5. Conclusion

The early detection of diabetes mellitus is needed to reduce loss of life, and it can be achieved with the help of modern machine learning techniques. In this paper, data cleaning, feature selection, data discretization and classification techniques have been applied to predict diabetes as accurately as possible. This work reveals that the Binary Logistic Regression classifier gives the maximum accuracy of 80.50% using only five dominant features. It is a challenging task in the machine learning and data mining areas to construct specific and computationally efficient classifiers for medical applications: even with the help of machine learning methods, it is difficult to diagnose the different medical conditions of a diabetes patient, and the prediction of such conditions is even more critical in nature. How these classification algorithms behave on big datasets is one of the future scopes of this project.

References
1. Yang Guo, Guohua Bai, Yan Hu, School of Computing, Blekinge Institute of Technology, Karlskrona, Sweden, Using Bayes Network for Prediction of Type-2 Diabetes.
2. Murat Koklu and Yavuz Unal, Analysis of a Population of Diabetic Patients Databases with Classifiers, International Journal of Medical, Health, Pharmaceutical and Biomedical Engineering, vol. 7, no. 8, 2013.
3. K.R. Lakshmi, S. Prem Kumar, Utilization of Data Mining Techniques for Prediction of Diabetes Disease Survivability, International Journal of Scientific & Engineering Research, vol. 4, issue 6, June 2013.
4. Ashwin Kumar U.M. and Dr. Ananda Kumar K.R., Predicting Early Detection of Cardiac and Diabetes Symptoms using Data Mining Techniques, International Conference on Computer Design and Engineering, vol. 49, 2012.
5. Manaswini Pradhan, Dr. Ranjit Kumar Sahu, Predict the Onset of Diabetes Disease using Artificial Neural Network, International Journal of Computer Science & Emerging Technologies, vol. 2, issue 2, April 2011.

A Decision Making Model through Cloud Computing

Pinakshi De 1, Tuli Bakshi 2
1 Assistant Professor, The Calcutta Anglo Gujarati College
2 Assistant Professor, Calcutta Institute of Technology, Kolkata, India

Abstract

The cloud computing model is a part of IT infrastructure which provides different resources to users on an on-demand, pay-per-usage basis. Users can access networks, storage, services and applications without physically accessing them. In this rapidly changing environment, cloud services are also growing rapidly, with ever newer functionalities. Therefore, the main task is to select the best cloud service provider to achieve all business criteria and goals at minimum cost. Similarly, for Cloud Service Providers (CSPs) [5], dynamic resource allocation is also a key issue. So, a proper decision making model [11] is required for selecting a service vendor and for dynamic resource allocation. A fuzzy analytic hierarchy process (FAHP) [3] methodology is used to develop a mathematical model for cloud service provider selection at minimum cost and for dynamic resource allocation. An illustrative example has been provided for the cloud computing computations.

Keywords : Cloud computing, FAHP, decision models

1. Introduction

Cloud computing [8] is a type of internet-based computing that refers to applications and services that run on a distributed network using a shared pool of resources on an on-demand basis [10], governed by standard internet protocols. In this form of computing the resources are virtualized and limitless. A cloud is defined as the combination of the infrastructure of a data centre with the ability to provision hardware and software. It enables organizations to focus on the main part of their businesses instead of spending time and money on computer infrastructure. The main objective of cloud computing is to pool physical resources and represent them as virtual resources. Nowadays cloud computing is the most anticipated computing model, where resources are available on an on-demand basis. Dynamic allocation of resources [6] is one of the most challenging issues in cloud computing. When several users make requests for resources in a cloud environment, a mathematical decision model is required to determine which resource request will be granted and to whom.
Similarly, in a multi-source scenario of cloud computing services, a decision model is also required for the user to choose the best cloud provider with respect to reliability [9], bandwidth, completion time, cost [12], scalability and flexibility. So, in cloud computing, resource allocation and the selection of a cloud provider become key issues, and a multi-criteria decision making model that depends on the different parameters and attributes of applications is required. Cloud computing has the following characteristics:
a) Service as per requirement: As per requirement or demand, a consumer can provision computing resources via an automated mechanism.
b) Resource pooling: Different physical and virtual resources are pooled by cloud providers and dynamically assigned and reassigned to multiple consumers according to their demands.
c) Wide network access: Resources are available over the network and accessed through standard internet protocols.
d) Scalability: Cloud computing capabilities, i.e. resources, are elastically provisioned; in some cases they are quickly acquired to scale out and quickly released to scale in.

e) Pay-as-per-use: The cloud service providers measure and charge for the usage of CPU, memory, network bandwidth and other resources as per consumer requirements.

Depending on the type of cloud, its size and the services provided, this computing model is categorized in two ways [6]:
a) Deployment models define the location and management of the cloud's infrastructure.
b) Service models define the particular types of services that are provided by the cloud computing model.

2. Model Description

The resource allocation problem is particularly challenging in cloud computing if Quality of Service (QoS) is to be maintained. During resource allocation across the cloud-based network, the parallel computing problem and the cost of each computational service should be considered. Cloud computing supports cloud consumers in accessing large amounts of data as well as resources, which are granted and released as per computational demand, so scheduling algorithms are required for resource allocation. But most scheduling algorithms have a common drawback: they assume that tasks are independent of each other, which holds only for simple cloud-based applications. A complex cloud application contains multiple subtasks, and inter-communication among them is required; it is challenging to develop a suitable scheduling algorithm for such dependent tasks. Game theory [2] is used to solve the optimization problem of resource allocation in network systems, but the case of multiple QoS-constrained cloud services needs more development. Applications should be written in such a way that they execute efficiently in the cloud and recover from failures. The cloud consumer and service provider agree upon a common contract, known as a Service Level Agreement (SLA), to provide better QoS at minimum cost through proper and efficient resource sharing/allocation. So, for complex cloud application services it is not easy for either the consumer or the service provider to maintain performance and QoS. When different numbers of cloud consumer tasks, cloud resources and virtualized servers are present, assigning tasks to the different servers is an NP-complete (non-deterministic polynomial time) problem. To maintain QoS, the assignment/mapping of tasks according to their resource demands is the most important thing, so finding an optimal resource allocation can be formulated as such an NP-complete problem. Game theory is used to determine optimal request dispatching, optimal resource allocation and offloading probability decisions. Using the payoff matrix and optimum strategies, we can determine a fair resource allocation with minimum energy loss. Game theory is of two types: non-cooperative games and cooperative games. In cooperative games, the decision makers enter into an agreement, called a binding agreement, and each decision maker considers the others' decisions. In non-cooperative games, each decision maker makes decisions only for his own benefit. The system then reaches the Nash equilibrium, where each decision maker makes an optimized decision, meaning that no one can benefit further without harming others. Cloud service providers provide storage and other resources to cloud consumers, so game theory provides a strategy for allocating storage resources to the nearest server to reduce communication cost and transport delay.
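Since the case study below applies a two-person zero-sum matrix game to pick an alternative, the following is a minimal Python sketch of how such a game can be solved for the optimal mixed strategies with linear programming. The payoff values are illustrative placeholders, not the weighted normalized scores computed in this paper.

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    # Optimal mixed strategy for the row player of a zero-sum matrix game,
    # found by maximizing the game value v subject to x^T A >= v, sum(x) = 1.
    A = np.asarray(payoff, dtype=float)
    shift = min(0.0, A.min())              # shift so all entries are positive
    A = A - shift + 1.0
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0      # minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (x^T A)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:-1], res.x[-1] + shift - 1.0   # strategy, game value

# Illustrative payoff matrix: rows = vendors (alternatives),
# columns = criteria; values stand in for weighted normalized scores.
payoff = [[0.20, 0.25, 0.15, 0.10],
          [0.30, 0.20, 0.35, 0.25],
          [0.15, 0.10, 0.20, 0.15],
          [0.10, 0.15, 0.10, 0.20],
          [0.25, 0.30, 0.20, 0.30]]
strategy, value = solve_zero_sum(payoff)
print("vendor weights:", np.round(strategy, 3), "game value:", round(value, 3))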
For non-multiplexing and multiplexing resource allocation, we can use the Nash equilibrium along with an evolutionary mechanism to achieve the best optimization by changing the strategies of the different participants; thus the Nash equilibrium can be used to find a feasible solution to the resource allocation game. For the best resource allocation we can implement fuzzy logic followed by the theory of evidence for service selection, and then a game-theoretic strategy to determine the best storage service provider for better QoS at minimum cost. So, dynamic resource allocation and the selection of a vendor can be done through a proper and correct mathematical decision model based on MCDM [2]. There are different MCDM methods such as TOPSIS [1], MAUT, CBR, SMART and AHP [7]. Among these, AHP, or rather fuzzy AHP, is more effective for preparing the decision model. We can use it for performance-type problems such as vendor selection and resource management, i.e. dynamic resource allocation, in a cloud computing environment. AHP is flexible and has the ability to check inconsistencies for tangible and non-tangible factors. Generally, AHP is used for pairwise comparisons where several alternatives are available with respect to the criteria and priority scales are to be determined. This technique uses priority weights and the consistency ratio to develop a multi-criteria decision hierarchy by comparing alternatives. AHP [4] is unable to handle those criteria which are independent of each other. For supplier selection, AHP can be used in the following steps:
a) Define the unstructured problem with its goals and results.
b) Decompose it into a hierarchical model with criteria and alternatives.
c) Make pairwise comparisons through the fuzzy triangular method.
d) Calculate eigenvalues and eigenvectors to find the weights of the decision parameters.
e) Find the ranking of the alternatives.
Here the AHP technique has been used in a fuzzy environment.

3. Case Study

In this case study, four preference criteria were selected for the vendor selection problem on the basis of questionnaires given to some experts: C1 - Reliability, C2 - Bandwidth, C3 - Completion time, C4 - Cost. The weights of the criteria are found by the Analytic Hierarchy Process (AHP) under fuzziness by a group of experts. The weights of the criteria and their calculation are presented in Table 1 and Table 2. The initial decision making matrix, the normalized decision making model and the weighted normalized decision making model are presented in Table 3, Table 4 and Table 5 respectively. A matrix game (two-person zero-sum game) is applied here for the selection of the alternative. The alternatives are assigned to the first player's strategies and the criteria are assigned to the second player's strategies. Dimensionless evaluation numbers are used for the payoff function. For the decision making, there is a normalized matrix of performance (Table 4) and a weighted normalized matrix of performance (Table 5). The initial values of all criteria of the decision making matrix are normalized; the criteria whose preferable values are minimum are normalized by applying a two-stage procedure, in which the values are first transformed and then normalized as before. According to the experts' decision, the following matrix is formed, and then by using Triangular Fuzzy Numbers the fuzzy evaluation matrix is formed.

I. Evaluation Matrix

II. Fuzzy Evaluation Matrix

There are different vendors available in an industry, and four criteria are identified for ranking the vendors. Now, calculating all the values by applying Chang's [4] theory, the following results are obtained: the minimum of all the values is (0.85, 1, 0.67, 0.4) and the weight vector is W = (0.29, 0.35, 0.22, 0.15).

It is well known that a strategic game is a model of interacting decision makers [22]. In this case study, the decision makers are considered as players. A strategic game consists of a set of players; for each player, a set of actions is defined; and for each player, preferences over the set of action profiles are given.

Value of the game:
Player 1: (0, , 0, 0, )
Player 2: (0, 0, , )

The optimal value of the game is . As a result of the calculation, the following vectors are obtained:
S*1 = (0, , 0, 0, ), for the first player
S*2 = (0, 0, , ), for the second player.

Result Analysis

The calculation of the result shows that the value of the second alternative (Vendor 2) equals 82% and the value of the fifth alternative (Vendor 5) equals 18% (approx.). The first, third and fourth alternatives are not involved in the determination of the equilibrium point because their functional influence is lower and they are dominated by the other alternatives. Again, the assessments of the criteria are represented by the optimum strategy of the second player. Here criterion 3 (Completion time) equals 54% and criterion 4 (Cost) is valued at 46% (approx.). Criteria 1 and 2 are not involved in the calculation of the equilibrium point because they are dominated by criteria 3 and 4.

5. Conclusions

This paper proposed a fuzzy AHP based decision making model to select a suitable cloud vendor on a SaaS platform. The four criteria reliability, bandwidth, completion time and cost were identified for the decision making and the development of the hierarchy model. The importance of the four criteria was determined by fuzzy AHP; other attributes were not included.

References
1. Liao, C.N., and Kao, H.P. (2011) An Integrated Fuzzy TOPSIS and MCGP Approach to Supplier Selection in Supply Chain Management, Expert Systems with Applications, Vol. 38 (9).
2. Lee, S., Seo, K.-K. (2013) A Decision-making Model for IaaS Provider Selection, Proceedings of the 3rd International Conference on Convergence Technology.
3. Saaty, T.L. (1990) How to Make a Decision: The Analytic Hierarchy Process, European Journal of Operational Research, 48.
4. Chang, D.Y. (1996) Theory and Methodology: Applications of the extent analysis method on fuzzy AHP, European Journal of Operational Research, 95.
5. Buyya, R., Yeo, C.S., Srikumar, V., James, B. and Ivona, B. (2009) Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility, Future Generation Computer Systems, 25.
6. Buyya, R., Broberg, J. and Goscinski, A. (2011) Cloud Computing: Principle and Paradigm, John Wiley & Sons, Hoboken.

7. Saaty, T.L. (2003) Decision-Making with the AHP: Why Is the Principal Eigenvector Necessary, European Journal of Operational Research, 145.
8. S. O. Kuyoro, F. Ibikunle and O. Awodele (2011) Cloud Computing Security Issues and Challenges, International Journal of Computer Networks (IJCN), vol. 3, issue 5.
9. Brett Snyder, Jordan Ringenberg, Robert Green, Vijay Devabhaktuni and Mansoor Alam (2015) Evaluation and design of highly reliable and highly utilized cloud computing systems, Journal of Cloud Computing: Advances, Systems and Applications, 4:11.
10. Ziqian Dong, Ning Liu and Roberto Rojas-Cessa (2015) Greedy scheduling of tasks with time constraints for energy-efficient cloud-computing data centers, Journal of Cloud Computing: Advances, Systems and Applications, 4:5.
11. Ward and Barker (2014) Observing the clouds: a survey and taxonomy of cloud monitoring, Journal of Cloud Computing: Advances, Systems and Applications, 3:24.
12. Justice Opara-Martins, Reza Sahandi and Feng Tian (2016) Critical analysis of vendor lock-in and its impact on cloud computing migration: a business perspective, Journal of Cloud Computing: Advances, Systems and Applications, 5:4.

Visual Brain Computer Interface

Suhel Chakraborty, Priyajit Ghosh
Dept. of Computer Science and Engineering, Jalpaiguri Govt. Engg. College, Jalpaiguri, West Bengal
priyajitghosh1@gmail.com

Abstract

This paper intends to show a methodology to implement a brain computer interface, with the design and installation of a low-cost electroencephalogram and a series of self-designed tools, both in software and hardware, so as to achieve the target of hands-free navigation as a present prospect.

Keywords : Brain computer interface, electroencephalogram or sensors, filters, instrumentation amplifiers, ADC to USB transceiver, processor and analyser

1. Introduction

As the proliferation of technology dramatically infiltrates all aspects of modern life, in many ways the world is becoming so dynamic and complex that technological capabilities are overwhelming human capabilities to optimally interact with and leverage those technologies. Fortunately, these technological advancements have also driven an explosion of neuroscience research over the past several decades, presenting engineers with a remarkable opportunity to design and develop flexible and adaptive brain-based neurotechnologies that integrate with and capitalize on human capabilities and limitations to improve human-system interactions. Major forerunners of this conception are brain-computer interfaces (BCIs), which to this point have been largely focused on improving the quality of life for particular clinical populations and include, for example, applications for advanced communication with paralyzed or locked-in patients as well as the direct control of prostheses and wheelchairs. Near-term applications are envisioned that are primarily task oriented and are targeted to avoid the most difficult obstacles to development.
In the farther term, a holistic approach to BCIs will enable a broad range of task-oriented and opportunistic applications by leveraging pervasive technologies and advanced analytical approaches to sense and merge critical brain, behavioural, task and environmental information. Communications and other applications that are envisioned to be broadly impacted by BCIs are highlighted here; however, these represent just a small sample of the potential of these technologies. This paper aims to show the design of a low-cost but effective electroencephalogram and its usage to implement a brain computer interface.

2. Background

This paper is intended to show a methodology that takes in analog brain signal feeds and converts them into corresponding digitised signals after proper amplification and filtering, so that after pattern recognition the signals can be fed into a computer through software for hands-free navigation. The aims of this paper are as follows:
1. To design a low-budget EEG machine.
2. Brain signal analysis and processing.
3. Pattern recognition and machine learning.
4. Brain computer interface implementation.

We have broken down the entire design into four parts, namely:
1. Sensors
2. Amplifiers and filters
3. ADC to USB transceiver
4. Processor and analyser

3. Proposed Technique

This part deals with the process that is planned to be implemented; a flow diagram for the same is given below.

Sensors: The sensor is a cap-shaped membrane fitted with circular disc electrodes on the outside. Fine insulated copper wires of 36 SWG are used to connect them with the signal bus, which sits on the lower rear part of the membrane. The field effect is used to pick up emitted brain waves, which is why a larger surface area (approx. 1.5 sq. cm) is used for the electrodes.

Amplifiers: This stage amplifies the analog signal received from the previous stage linearly, relying on the Common Mode Rejection Ratio to filter hard-to-remove noise.

Fig 1: The instrumentation amplifier used in the project

ADC to USB Transceiver: The ARM7TDMI-S core LPC2148 at the heart of this module samples the signal at 500 k samples/s and sends the data to a PC through USB using the isochronous transfer protocol.

Fig 2: ADC to USB Transceiver (used in the project)

Processor and analyser: This is the software-only stage, running completely on a Windows-based PC. It stores and accesses clustered data samples and runs complex pattern and frequency analyzer algorithms to predict thinking patterns, identify the intent and accomplish the intended task with at least 80% success. It also provides an API for end users to incorporate into their apps to extend the application.

Fig 3: The simplified flow diagram

As the flow diagram suggests, the whole procedure is divided into the following steps:
1. Analog brain signal acquisition from a 14-channel EEG (which is still under development). In this stage we plan to extract brain signals within the range of 3-40 Hz, which approximately comprises the spectrum starting with the delta waves and ending with the gamma waves. Fine insulated copper wires of 36 SWG connect the electrodes with the signal bus on the lower rear part of the membrane; the field effect is used to pick up emitted brain waves, which is why a larger surface area (approx. 1.5 sq. cm) is used for the electrodes.
2. The second stage is the amplification phase, where we use instrumentation amplifiers with a high CMRR (Common Mode Rejection Ratio) to filter out hard-to-remove noise. The signal then passes through several active filter stages where a steep noise reduction is performed from 30 Hz onwards. We aimed to take in a maximum amplitude of 3 V on a 3.3 V line.
3. In the third stage we filter out noise using notch filters; the permitted bandwidth extends to 43 Hz and anything above is filtered out.
4. The fourth stage comprises the ADC to USB transceiver using the LPC2148 ARM7TDMI core. The ADC samples at 500 kHz, the USB is configured with an 8-bit single isochronous endpoint, and the main processor clock is 60 MHz.
5. In the fifth stage we send the discretely sampled data to the PC over USB 2.0 full speed.
6. The discrete data is then subjected to K-means clustering, and once the clustering is done it is used for machine learning with the VISUAL BCI software.
In the discrete processing phase we will integrate the whole system and try some test cases; once that is done the project will be complete and an end-user interface will be generated.

4. Conclusions

The present status of the project stands at:
1. The ADC to USB transceiver is ready.
2. The instrumentation amplifiers are ready.
3. The software-based spectrum analyser is ready.
4. The EEG and the algorithms for pattern recognition still need to be improved.
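To make the clustering of step 6 concrete, the following is a minimal sketch in Python, assuming scikit-learn is available and that the sampled EEG has already been cut into fixed-length windows and reduced to band-power features; the feature layout, window count and cluster count are illustrative assumptions, not values taken from the project.

```python
# Minimal sketch of the K-means step, assuming windowed EEG data has already
# been reduced to band-power features; all sizes below are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_eeg_windows(band_power, n_clusters=4, random_state=0):
    """band_power: array of shape (n_windows, n_features), one row per EEG window."""
    # Standardise features so no single frequency band dominates the distance metric.
    features = StandardScaler().fit_transform(band_power)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(features)
    return labels, km.cluster_centers_

if __name__ == "__main__":
    # Placeholder data standing in for real band-power features (delta to gamma bands).
    rng = np.random.default_rng(42)
    demo = rng.normal(size=(200, 14 * 5))   # 14 channels x 5 bands per window
    labels, centres = cluster_eeg_windows(demo)
    print(labels[:10], centres.shape)
```

In the planned pipeline, cluster labels of this kind would be the quantities handed on to the machine-learning stage of the VISUAL BCI software.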

Current prospects include:
1. Brain signal analysis and machine learning.
2. Controlling clicks (or cursor movement) through brain waves.

References
1. D. Banks, L. House, F. R. McMorris, P. Arabie, W. Gaul (Eds.), Classification, Clustering, and Data Mining Applications, 1st edition, Proceedings of the meetings of the IFCS, Illinois Institute of Technology, Chicago.
2. Dantanarayana, Navini (2014), Master's Thesis, Department of Computer Science, Colorado State University, Fort Collins, CO.
3. R. Ashari and C. Anderson, EEG Subspace Analysis and Classification Using Principal Angles for Brain-Computer Interfaces, in Proceedings of the 2014 IEEE Symposium on Computational Intelligence in Brain-Computer Interfaces (CIBCI).
4. Jan Axelson, USB Complete: The Developer's Guide, 4th edition, Lakeview Research LLC.
5. Huggins, J., Guger, C., Allison, B., Anderson, C., Batista, A., Brouwer, A., Brunner, C., Chavarriaga, R., Fried-Oken, M., Gunduz, A., Gupta, D., Kübler, A., Leeb, R., Lotte, R., Miller, L., Müller-Putz, R., Rutkowski, T., Tangermann, M., Thompson, D., Brain-Computer Interfaces, vol. 1, no. 1, Taylor & Francis.

Implementation of Cloud Based Centralized C, C++, Java Compiler
Arpit Sanghai 1, Snehasish Das 2, Simon Lepcha 3
Computer Science and Engg., Jalpaiguri Govt. Engg. College, Jalpaiguri, India
1 arpitsanghai.id@gmail.com, 2 snehasishjames007@gmail.com, 3 ceemonlepcha1996@gmail.com

Abstract
Cloud computing involves accessing computational resources over a computer network instead of a local machine. Every programming language requires a compiler to be installed on the machine where the code is to be run. Today, especially in educational institutions, a major drawback is that a compiler needs to be installed manually on every system, which requires physical space and appropriate configuration. Another major drawback is that a different compiler must be installed for each language we wish to work in. The specifications of the system on which the compiler is installed also play a vital role in the speed of compilation. Keeping these issues in mind, we propose a solution in the form of a cloud-based compiler. Our aim is to build a system that allows centralized compiling of code and helps reduce the problems of compiler installation and storage space by making use of the available server, so that code can be easily written and compiled without any system dependencies or storage constraints. This system would allow compilation of code written in C, C++ and Java from multiple machines without the burden of installing any compiler software on each of those machines.

Keywords: Cloud Computing, compiler.

1. Introduction
The National Institute of Standards and Technology (NIST) defines Cloud Computing as a model for enabling easy, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Examples of cloud services include online storage, social media sites, webmail and online commercial applications.
The cloud computing model allows access to information and computational resources from any place where a network connection is available. A compiler, on the other hand, is a software program that transforms high-level source code written by the developer into low-level object code, which is then executed by the computer. This is mainly done to create executable files which can then be run in order to execute the program and its instructions. The main motivation behind our project is that most computers in the labs of educational institutions face problems when running multiple compilers; they often fall short both in system requirements and in storage space. In addition, installing compilers on each individual machine, and updating those compilers whenever a new release appears, is a tedious job. The solution we propose focuses on an online compiler that helps reduce the problems caused by inadequate computational resources and storage space by using the concept of cloud computing. We intend to construct a web-based application that can be used remotely via a network connection to compile code. The errors or output of the compiled program can easily be conveyed to the user, and the tedious job of installing a compiler on every machine is avoided. Hence, our cloud-based compiler is primarily concerned with providing a platform to compile and run programs independent of any machine-related limitation or complexity. The compilers are hosted on a private cloud, so that users can easily write and execute programs and

view the output. Besides providing a centralized compiling scheme for organizations or institutions, the other major advantage this system will have over others is that it will make the user's system lightweight, i.e. there will be no need to maintain separate compilers/SDKs at the client side. Thus, for educational institutions this will prove to be highly efficient.

2. Literature Survey
A considerable amount of previous work has been done in this domain. Several successful attempts have been made, which further encourage us to pursue this line of work. Mayank Patel in [1] proposed a solution similar to the one we propose; his work focused on providing only a Java compiler over the cloud. Mehare Suraj, Paliwal Poonam, Pardeshi Mangesh and Begum Shahnaz in [2] worked on the concept of providing a centralized compiler over a private cloud for a limited number of users or for a particular institution. In [3] there is a report of a survey conducted on the feasibility of providing online compilers for languages. A. Rabiyathul Basariya and K. Tamil Selvi in [4] again aim to provide a centralized compiler, specifically for the C Sharp language. In [5] a similar approach to a centralized compiler over the cloud was proposed. Grobauer, Walloschek and Stocker in [6] conducted a study on possible cloud computing vulnerabilities; this is an important aspect, since our entire project will be cloud dependent, and an idea of the possible vulnerabilities will further help us in the process. The research in [7] and [8] also focuses mainly on centralized compilers: [8] describes a generalized cloud-based compiler, whereas [7] specifically focuses on cloud documentation and a centralized compiler for Java and PHP. A brief review of all this literature has helped us formulate the methodology that we are going to implement in our project. Many of the works have focused on languages other than the ones we aim to cover; the languages we focus on are specifically C, C++ and Java. The literature review has enabled us to investigate the challenges, formulate a methodology to deal with them and accordingly proceed towards the completion of the project.

3. Proposed Methodology
The whole system consists of two parts, namely the Server-End and the Client-End.
The functionalities of these two parts are discussed as follows:

Client-End: HTML and PHP are used to design the interface through which the client interacts with the system. The client enters a username and password to log in if already registered; otherwise the client needs to register and then log in.

Server-End: The cloud is implemented with the help of the server, whose computational resources are used by the clients connected to it over a network. The server contains a User Details Database, which holds the username, password and basic details of registered users, and a File Database, which holds the project files saved by the users. Code written in C# is used to validate the credentials entered by the user at login. The web application, created using ASP.NET (C#), is hosted on an IIS Express server. The backend code which triggers the compiler is also written in C#.

The system that we intend to build is solely a cloud-based application. The application is divided into several modules, where each module provides a specific functionality to the system and further strengthens the implementation of a compiler over the cloud. The main functionalities of the different modules are as follows:

Register: A client must register in order to log in to the system.
Login: A registered client must log in with his username and secret password. This module executes the login and client-verification process using the User Details Database.
Create new File/Program: This module permits clients to create a new file or program with a name of their choice. Documents are stored in the File Database.
Open File/Program: This module permits clients to open previously saved files or programs.
Delete File: This module allows clients to delete particular programs or files that they have created earlier.
Save: This module permits clients to save their activities and records. These files are stored in the File Database.
Compile: This module permits clients to compile their code by invoking a compiler. The compilation result is shown to the client.
Run: This module allows clients to run the compiled code. The output is shown to the client.

4. Results
We have developed a Software as a Service that successfully compiles and runs Java, C and C++ programs. The client can log in, select the desired compiler, write code and compile it. The software shows the result of compilation and, if it is successful, the client can run the program and see the result in the output box. Any input needed by the program can be provided inside the input box. Snapshots of the system are provided in figures 1 and 2.

Fig 1: A snapshot of the system while executing Java code
Fig 2: A snapshot of the system while executing C code
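The Compile and Run modules described above amount to writing the submitted source to a temporary location on the server, invoking the appropriate compiler, and returning the diagnostics or program output to the client. The actual backend is written in C# and hosted on IIS Express; the sketch below is only a language-neutral illustration of that flow in Python, and it assumes gcc is installed on the server.

```python
# A minimal sketch of the Compile/Run flow (the real backend is C# on IIS);
# it assumes gcc is available on the server and simply shells out to it.
import subprocess
import tempfile
import os

def compile_and_run_c(source_code: str, stdin_text: str = "") -> dict:
    with tempfile.TemporaryDirectory() as workdir:
        src = os.path.join(workdir, "main.c")
        exe = os.path.join(workdir, "main.out")
        with open(src, "w") as f:
            f.write(source_code)
        # Compile step: capture compiler diagnostics to return to the client.
        compile_proc = subprocess.run(["gcc", src, "-o", exe],
                                      capture_output=True, text=True)
        if compile_proc.returncode != 0:
            return {"ok": False, "errors": compile_proc.stderr}
        # Run step: feed the contents of the client's input box to the program's stdin.
        run_proc = subprocess.run([exe], input=stdin_text,
                                  capture_output=True, text=True, timeout=5)
        return {"ok": True, "output": run_proc.stdout, "errors": run_proc.stderr}

if __name__ == "__main__":
    demo_source = '#include <stdio.h>\nint main(){int x;scanf("%d",&x);printf("%d\\n",x*2);return 0;}'
    print(compile_and_run_c(demo_source, stdin_text="21\n"))
```

An equivalent pair of steps, with javac and java in place of gcc, would cover the Java case.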

5. Conclusion
Although using the concept of the cloud solves the problem of installing and updating compilers on individual machines, one problem still remains in terms of resource availability. Deployed from a single simple server, this software might not be able to handle the workload of numerous machines submitting code for compilation at once; we are therefore contemplating the idea of implementing a cluster in order to speed up the compilation process. In addition, the features of saving files and reopening previously saved files for an existing client are yet to be implemented in our software.

References
1. Mayank Patel, Online Java Compiler Using Cloud Computing, International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume 2, Issue 2, January.
2. M. Suraj, P. Poonam, P. Mangesh, B. Shahnaz, Private Cloud Implementation for Centralized Compilation, International Journal of Soft Computing and Engineering (IJSCE), Volume 3, Issue 5, November.
3. P. Doke, S. Shingote, S. Kalbhor, A. Singh, H. Yeole, Online C, C++, Java Compiler Using Cloud Computing: A Survey, International Journal of Advances in Engineering Science and Technology.
4. A. Rabiyathul Basariya and K. Tamil Selvi, Centralized C Sharp Compiler Using Cloud Computing, International Journal of Communications and Engineering, vol. 06, no. 6, Issue 02, March.
5. A. Nizam Ansari, S. Patil, A. Navada, A. Peshave, V. Borole, Online C/C++ Compiler Using Cloud Computing, Multimedia Technology (ICMT), July 2011 International Conference.
6. B. Grobauer, B. Walloschek, T. Stocker, Understanding Cloud Computing Vulnerabilities, IEEE Security and Privacy, Vol. 9, Issue 2, March-April.
7. N. Raut, D. Parab, S. Sontakke, S. Hanagandi, Cloud Documentation and Centralized Compiler for Java and Php, International Journal of Computational Engineering Research (ijceronline.com), Vol. 3.
8. S. Abdulla, S. Iyer, S. Kutty, Cloud Based Compiler, International Journal of Students Research in Technology and Management, Vol. 1(3), May 2013.

Comparison of Various Methods to Predict the Occurrence of Diabetes in Female Pima Indians
1 Somnath Rakshit, 1 Suvojit Manna, 1 Riyanka Kundu, 1 Sanket Biswas, 1 Priti Gupta, 2 Sayantan Maitra, 1 Subhas Barman
1 Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal, India
2 Institute of Pharmacy, Jalpaiguri, West Bengal, India

Abstract
Diabetes is one of the most dreadful diseases affecting humankind in several countries. People in various places are working on different methods to avert this disease at a premature stage by anticipating the symptoms of diabetes. In this paper, our main aim is to predict the onset of diabetes among women aged at least 21 using binary classification and to compare the results. We have compared the performance of algorithms used to predict diabetes with data mining techniques. In this paper we compare machine learning classifiers (Decision Jungle, Decision Forest, Decision Trees, Perceptron, Bayes Point Machine, Logistic Regression, Support Vector Machines, Neural Networks) to classify patients with type 2 diabetes. These approaches have been tested with the Pima Diabetes data samples downloaded from the UCI Machine Learning data repository.
The performance of the algorithms has been measured and compared in terms of accuracy and recall. The model can be used by endocrinologists, dieticians, ophthalmologists and podiatrists to predict whether or not a patient is likely to suffer from diabetes.

Keywords: Decision Tree; Decision Jungle; Decision Forest; Logistic Regression; Perceptron; Support Vector Machines; Neural Networks; Diabetes

1. Introduction
Diabetes Mellitus is one of the most serious challenges being faced in both developed and developing countries. Medical history data comprise a number of tests essential to diagnose a particular disease, and the diagnosis is based on the experience of the physician; a less experienced physician may diagnose a problem incorrectly. Data mining applications can prove to be of huge benefit to all parties in the healthcare industry. In healthcare there is a vast amount of data, so there is a need for a powerful tool for analyzing and extracting important information from this complex data. This can help in limiting costs, enhancing profits and maintaining a high quality of patient care. Here we focus our attention on the prediction of diabetes mellitus and provide a detailed performance analysis of all the techniques applied in this paper. People with diabetes have a high risk of developing a number of serious health problems. Diabetes mellitus is classified into two broad categories: Type 1 and Type 2. Diabetes mellitus type 1 is a form of diabetes mellitus in which not enough insulin is produced. High blood sugar levels are usually a result of the lack of proper insulin secretion in the body. The classical symptoms are frequent urination, increased thirst, increased hunger and weight loss. Diabetes mellitus type 2 is a long-term metabolic disorder characterized by high blood sugar, insulin resistance and a relative lack of insulin. Common symptoms include increased thirst, frequent urination and unexplained weight loss.

The risk of long-term complications is increased by all forms of diabetes. Consistently high blood glucose levels can lead to serious diseases affecting the heart and blood vessels, eyes, kidneys, nerves and teeth. Diabetes is a leading cause of many conditions, such as vision ailments and kidney complications, in developing countries. Hence, it has become essential to develop predictive models using the risk factors that lead to the occurrence of diabetes. Many studies have suggested statistical models as predictors [2][3]. Data mining predicts the future by creating a predictive model. The data mining process for the diagnosis of diabetes can be divided into five steps, though the underlying principles and techniques used for mining diabetic databases may differ between projects and countries [4]. The main goal of the data mining process is to excavate valuable facts from a data set and transfigure them into an interpretable form for future use. Data mining problems are often solved using various approaches from both computer science, such as soft computing, multidimensional databases, data visualization and machine learning, and statistics, including regression, classification, hypothesis testing and clustering techniques. In recent years, data mining has been used extensively in the fields of engineering and science, such as medicine, bioinformatics, education and genetics.

2. Proposed Method
2.1 Methodology
Many classification algorithms have been used in various applications of medical science. The training phase is the first step of the classification of data, which is a two-phase process in itself. In this stage, a classifier is built by the classification algorithm from the training set of tuples. The second phase is the classification phase, where the trained model is used for classification and its performance is analyzed with the testing set of tuples [6]. Classification is carried out to understand exactly how the data are being categorized. In general, a list of machine learning algorithms is operated on a model, which is then run multiple times after tuning algorithm parameters or input data weights to increase the accuracy of the classifier [7]. We follow five basic steps to construct, train and score our machine learning model:

Create a model
Step 1: Collecting the data
Step 2: Preparing the data
Step 3: Selecting definitive features for the data
Training the prediction model
Step 4: Choosing and applying a machine learning algorithm
Scoring and testing the prediction model
Step 5: Predicting if the patient is diabetic or not

Step 1: Getting the data
We used the Pima Indians Diabetes Binary Classification dataset for preparing the prediction model. This dataset includes entries for individual Pima Indian females, with information such as insulin level, glucose concentration, BMI, age, etc.

Step 2: Prepare the data
Some pre-processing is usually required before a dataset can be analyzed. There are missing values present in the columns of various rows. These missing values needed to be cleaned so that the model could analyze the data correctly, so we removed all rows that had missing values. We also excluded from the model those columns which had a large proportion of missing values.
Then we scaled the data as given in Table 1.

Table 1: Features of our dataset
Feature ID | Feature Name | Scaling Process
F1 | Number of Pregnancies | F1/10
F2 | Age | F2/100
F3 | Skin Thickness | Ignored
F4 | Diastolic Blood Pressure | F4/80
F5 | Body Mass Index | (F5-18)/(25-18)
F6 | Diabetes Pedigree Function | Unchanged
F7 | Plasma glucose concentration at 2 hours in an oral glucose tolerance test | (F7-50)/(120-50)
F8 | 2-Hour serum insulin (mu U/ml) | (F8-14)/(250-10)

Step 3: Define features
In machine learning, features are individual measurable properties of the thing of interest. In our dataset, each row represents one female and each column is a feature of that female. We found a good set of features for creating the predictive model; this required experimentation and knowledge of the problem. We observed that some features were better than others for predicting the target. Also, some features had a strong correlation with other features, and hence we removed them.

Step 4: Choose and apply a learning algorithm
Now that the data was ready to be used, we had to construct a predictive model, which consists of training and testing. We first used our data to train the model and then tested the model to see how closely it was able to predict the result. We used our data both for training and for testing by splitting it into separate training and testing datasets: 80 percent of the data was used to train the model, and 20 percent was held back for testing.

Step 5: Predict whether diabetic or non-diabetic
Having trained the model on 80 percent of our data, we used it to score the other 20 percent to see how well the model performs. The output shows the predicted classification for the females alongside the known values from the test data. Finally, we assess the quality of the results with accuracy, precision and recall.

The following algorithms have been considered in our performance analysis for the prediction of diabetes:

1) Two-Class Locally Deep Support Vector Machine
Support Vector Machines are linear predictive models, but with a two-class locally deep SVM a degree of non-linearity can be added to the linear model by using a neural network in its kernel function. This leads to improved prediction speed while maintaining nearly the same accuracy level. These models are also optimized to be scalable to larger datasets [8].

2) Two-Class Averaged Perceptron
The averaged perceptron method is an early and very simple version of a neural network. In this supervised learning method, inputs are classified into several possible outputs based on a linear function and then combined with a set of weights derived from the feature vector, hence the name perceptron. A perceptron is basically a two-layered neural network and is thus a linear model. It is an online model that processes examples serially. Perceptrons are generally faster than neural networks but are limited to linearly separable problems.

3) Two-Class Decision Forest
The decision forest is a classic classification algorithm. It works by building multiple decision trees, taking a weighted decision from them, and generating an unbiased estimate of the generalization error as the forest is built. Voting is a form of aggregation in which each tree in a classification decision forest outputs a non-normalized frequency histogram of labels; the aggregation process sums these histograms and normalizes the result to get the probabilities for each label. The decision forest algorithm does not overfit, is fast and scalable for large datasets, and can handle thousands of variables without deletion. Decision trees have many advantages: they can represent non-linear decision boundaries; they are scalable, efficient and fast; they perform integrated feature selection and classification; and they are resilient to noisy datasets.

4) Support Vector Machine
The Support Vector Machine (SVM) can also be used as a classification algorithm. It predicts the class by fitting a hyperplane in a multidimensional feature space. The SVM offers high prediction accuracy and is resilient against overfitting, and it gives empirically good performance in the fields of bioinformatics, text and image recognition.

5) Two-Class Decision Jungle
Decision jungles are a derivative of random forests. Unlike a decision forest, a decision jungle may contain multiple paths to the decision nodes instead of a single path, thus forming directed acyclic graphs (DAGs) [8]. Decision jungles have the following advantages: by allowing tree branches to merge, a decision DAG typically has a lower memory footprint and better generalization performance than a decision tree, albeit at the cost of somewhat longer training time; decision jungles are non-parametric models that can represent non-linear decision boundaries; and they perform integrated feature selection and classification and are resilient in the presence of noisy features.

6) Logistic Regression
Logistic regression is a statistical method that uses logistic functions to predict the value of the dependent variable. Binomial and multinomial are the two types of logistic regression: binomial regression is used to predict a dependent variable with only two possible outcomes, while multinomial regression is used when there are three or more possible outcomes. In this experiment we have used binomial regression with two outcomes: either the female is diabetic or non-diabetic.

3. Performance Evaluation and Results
A. Dataset and Measures
In this method we make use of the Pima Indian diabetes dataset, which has been collected from the UCI repository and is classified into two classes, diabetic and non-diabetic. The dataset consists of 8 attributes and 1 class and contains 768 instances [5]. Of these, 392 rows contained data without any missing values; the training data consists of 313 entries and the test data of 79 entries. In the class distribution, Tested Positive for Diabetes is interpreted as class value 1 and Tested Negative for Diabetes as class value 0.

B. Experiment Results
For our experiment, the tools we used are R, SQL and Python within Microsoft Azure Machine Learning Studio, and our results are compared in it.
The criteria that we used for comparing the classifiers are accuracy, precision and recall. In simple terms, high accuracy means that most of the measurements of a quantity are close to the quantity's true value. High precision means that the model returned substantially more relevant results than irrelevant ones, while high recall means that the model returned most of the relevant results. Our classification measures are evaluated using true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) [4]. The accuracy of the different prediction methods is given in Table 3.

Table 2: Confusion matrix
Actual vs Predicted | Predicted Positive | Predicted Negative
Actual Positive | TP | FN
Actual Negative | FP | TN

Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)

Table 3: Performance comparison between different algorithms
Serial No. | Algorithm used | Accuracy | Precision | Recall
1 | Two-class locally deep Support Vector Machine | 83.3% | 78.9% | 62.5%
2 | Two-class Averaged Perceptron | 84.6% | 83.3% | -
3 | Two-class Decision Forest | 78.2% | 66.7% | -
4 | Two-class Support Vector Machine | 83.3% | 82.4% | -
5 | Two-class Decision Jungle | 82.1% | 81.3% | -
6 | Logistic Regression | 84.6% | 83.3% | -
7 | Two-class Neural Network | 83.3% | 78.9% | -

7) Two-Class Neural Network
A two-class neural network is a set of interconnected layers in which the inputs lead to the outputs through a series of weighted edges and nodes. The weights on the edges are learned when the neural network is trained on the input data. The graph is directed from the input layer through the hidden layer, with all nodes of one layer connected by weighted edges to the nodes in the next layer. Most predictive tasks can be accomplished easily with only one or a few hidden layers.
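The experiments above were run inside Microsoft Azure Machine Learning Studio; purely as an illustration of the same evaluation procedure (drop incomplete rows, 80/20 split, score with accuracy, precision and recall), the following is a minimal scikit-learn sketch. The file name and the "Outcome" column label are assumptions about a local copy of the UCI Pima Indians Diabetes data, not artefacts of the original study.

```python
# Minimal sketch of the evaluation pipeline: clean, split 80/20, train one of
# the compared classifiers, and report accuracy, precision and recall.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

df = pd.read_csv("pima_indians_diabetes.csv")   # assumed local copy of the UCI data
df = df.dropna()                                # step 2: drop rows with missing values
X = df.drop(columns=["Outcome"])                # "Outcome" column name is an assumption
y = df["Outcome"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

print("Accuracy :", accuracy_score(y_test, pred))
print("Precision:", precision_score(y_test, pred))
print("Recall   :", recall_score(y_test, pred))
```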

4. Conclusion
To summarise, we have compared seven different prediction models for predicting diabetes mellitus using 8 important attributes. We observe that, out of the seven prediction methods, the Two-class Averaged Perceptron and Logistic Regression methods achieved the highest accuracy of 84.6%. This study can be used to select the best classifier for predicting diabetes. The model might be used to identify women of our society, aged 21 or above, who are at high risk of developing diabetes and to provide timely intervention in their treatment. In this way, healthcare systems can help prevent early deaths and lower health risks by employing such predictive techniques. Targeted prevention strategies can be planned to ensure decent preventive measures are taken for individuals at high risk. This paper presents an approach that can be used for hybrid model construction in community health services. These classification algorithms can be implemented for the prediction and classification of other dominant diseases with suitable data sets. Another future scope is to see whether applying new algorithms brings any improvement over the techniques used in this paper.

References
1. Jia Z, Zhou Y, Liu X, Wang Y, Zhao X, Wang Y, Liang W, Wu S. Comparison of Different Anthropometric Measures as Predictors of Diabetes Incidence in a Chinese Population. Diabetes Research and Clinical Practice, 2011; 92.
2. Encyclopedia of Data Warehousing and Mining, edited by John Wang, Idea Group Publishing, PCK Edition.
3. Lily T, Hossein M, Omid H, Jalal P. Real-Data Comparison of Data Mining Methods in Prediction of Diabetes in Iran. Healthcare Information Research, 2013; 19.
4. UCI Repository of Machine Learning Databases, University of California at Irvine, Department of Computer Science.
5. Mitchell TM. Machine Learning. Boston, MA: McGraw-Hill.
6. Gaber, Mohamed Medhat, Arkady Zaslavsky and Shonali Krishnaswamy. Mining data streams: a review. ACM SIGMOD Record 34.2 (2005).
7. Varma, Manik. Local deep kernel learning for efficient non-linear SVM prediction. International Conference on Machine Learning.
8. Shotton, Jamie, et al. Decision jungles: Compact and rich models for classification. Advances in Neural Information Processing Systems.

Use of Error Detection and Correction in Communication Engineering
Arkopal Ray
B.Tech Student, Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology
arkopal0111@gmail.com

Abstract
In today's world of wireless communication, the basic need of any communication system is to transmit and receive error-free data through a noisy channel. With the advancement of data transmission, the sources of noise and interference have also increased. Many efforts have been made by engineers to meet the demand for more reliable and efficient techniques for detecting and correcting errors in received data. Various techniques are used to detect and correct errors in data transmission. This review paper surveys the numerous error detection and correction techniques that have been in use over the last few decades.
Keywords: Error detection and correction (EDAC), Checksum, Cyclic Redundancy Check (CRC), Parity check method, Hamming code

The general idea for achieving error detection and correction is to add some redundancy (i.e., some extra data) to a message, which receivers can use to check the consistency of the delivered message and to recover data determined to be corrupted. Error-detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some deterministic algorithm. If only error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. In a system that uses a non-systematic code, the original message is transformed into an encoded message that has at least as many bits as the original message.

Error Detection
Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs; however, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). The basic approach used for error detection is the use of redundancy, where additional bits are added to facilitate the detection and correction of errors. Popular techniques are:
Simple parity check
Two-dimensional parity check
Checksum
Cyclic redundancy check

Simple Parity Check: The most common and least expensive mechanism for error detection is the simple parity check. In this technique, a redundant bit, called a parity bit, is appended to every data unit so that the number of 1s in the unit (including the parity bit) becomes even.
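As a small illustration of the simple parity scheme just described, the sketch below appends an even-parity bit at the sender and re-checks it at the receiver; the data bits are arbitrary demonstration values.

```python
# Even-parity sketch: the sender appends one bit so the count of 1s is even,
# and the receiver recomputes the parity to detect a single-bit error.
def add_even_parity(bits):
    parity = sum(bits) % 2              # 1 if the count of 1s is odd, else 0
    return bits + [parity]

def check_even_parity(bits_with_parity):
    return sum(bits_with_parity) % 2 == 0   # True means no detectable error

data = [1, 0, 1, 1, 0, 0, 1]
frame = add_even_parity(data)
assert check_even_parity(frame)
frame[2] ^= 1                            # flip one bit to simulate a channel error
assert not check_even_parity(frame)
```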

Two-Dimensional Parity Check: Performance can be improved by using a two-dimensional parity check, which organizes the block of bits in the form of a table. Parity check bits are calculated for each row, which is equivalent to a simple parity check bit, and parity check bits are also calculated for all columns; both are then sent along with the data. At the receiving end these are compared with the parity bits calculated on the received data.

Checksum: In the checksum error detection scheme, the data is divided into k segments of m bits each. At the sender's end the segments are added using 1's complement arithmetic to get the sum, and the sum is complemented to get the checksum. The checksum segment is sent along with the data segments, as shown in Fig (a). At the receiver's end, all received segments are added using 1's complement arithmetic to get the sum, and the sum is complemented. If the result is zero, the received data is accepted; otherwise it is discarded.

Fig (a): Checksum error detection at the sender and receiver

Cyclic redundancy checks (CRCs): A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks; as a result, it is not suitable for detecting maliciously introduced errors. It is characterized by the specification of what is called a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend, such that the remainder becomes the result. A cyclic code has favourable properties that make it well suited to detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in digital networks and storage devices such as hard disk drives. Even parity is a special case of a cyclic redundancy check, where the single-bit CRC is generated by the divisor x + 1.

Error-correcting codes: Any error-correcting code can be used for error detection. A code with minimum Hamming distance d can detect up to d - 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes and can be used to detect single errors; the parity bit is an example of a single-error-detecting code.

Automatic repeat request (ARQ): ARQ is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.
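The checksum procedure described above (1's complement addition of k m-bit segments, complementing the sum, and re-adding at the receiver) can be illustrated with the following sketch; the segment width and data values are arbitrary demonstration choices.

```python
# 1's complement checksum sketch: the sender complements the end-around-carry
# sum; the receiver accepts the frame only if everything re-adds to all 1s
# (whose complement is zero, matching the acceptance rule in the text).
def ones_complement_sum(segments, m=8):
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)   # end-around carry
    return total

def make_checksum(segments, m=8):
    return (~ones_complement_sum(segments, m)) & ((1 << m) - 1)

def verify(segments_with_checksum, m=8):
    return ones_complement_sum(segments_with_checksum, m) == (1 << m) - 1

data = [0b10110011, 0b01101100, 0b11110000]
frame = data + [make_checksum(data)]
assert verify(frame)                 # accepted
frame[1] ^= 0b00000100               # corrupt one segment
assert not verify(frame)             # discarded
```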
Forward error correction (FEC): The sender encodes the data using an error-correcting code (ECC) prior to transmission. The additional information (redundancy) added by the code is used by the receiver to recover the original data; in general, the reconstructed data is what is deemed the most likely original data. FEC is a process of adding redundant data, or parity data, to a message such that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) were introduced, either during transmission or in storage. Since the receiver does not have to ask the sender for retransmission of the data, a back channel is not required in forward error correction, and it is therefore suitable for simplex communication such as broadcasting. Error-correcting codes are frequently used in lower-layer communication, as well as for reliable storage in media such as CDs, DVDs, hard disks, and RAM. Error-correcting codes are usually divided into convolutional codes and block codes. Convolutional codes are processed on a bit-by-bit basis; they are particularly suitable for implementation in hardware, and the Viterbi decoder allows optimal decoding. Block codes are processed on a block-by-block basis. Early examples of block codes are repetition codes, Hamming codes and multidimensional parity-check codes. They were followed by a number of efficient codes, Reed-Solomon codes being the most notable due to their current widespread use. Turbo codes and low-density parity-check codes (LDPC) are relatively new constructions that can provide almost optimal efficiency.

Hamming Codes
The most common types of error-correcting codes used in RAM are based on the codes devised by R. W. Hamming. In the Hamming code, k parity bits are added to an n-bit data word, forming a new word of n + k bits. The bit positions are numbered in sequence from 1 to n + k; those positions numbered with powers of two are reserved for the parity bits, and the remaining bits are the data bits. The code can be used with words of any length. Before giving the general characteristics of the Hamming code, we illustrate its operation with a data word of eight bits. Consider, for example, an 8-bit data word. We include four parity bits with this word and arrange the 12 bits so that the four parity bits P1 through P8 are in positions 1, 2, 4, and 8, respectively, and the 8 bits of the data word are in the remaining positions. Each parity bit is calculated as follows:
P1 = XOR of bits (3, 5, 7, 9, 11)
P2 = XOR of bits (3, 6, 7, 10, 11)
P4 = XOR of bits (5, 6, 7, 12)
P8 = XOR of bits (9, 10, 11, 12)
Recall that the exclusive-OR operation performs the odd function: it is equal to 1 for an odd number of 1s among the variables and to 0 for an even number of 1s. Thus, each parity bit is set so that the total number of 1s in the checked positions, including the parity bit, is always even. The 8-bit data word is written into memory together with the 4 parity bits as a 12-bit composite word; substituting the 4 parity bits in their proper positions gives the 12-bit composite word written into memory. When the 12 bits are read from memory, they are checked again for errors. The parity of the word is checked over the same groups of bits, including their parity bits.
The four check bits are evaluated as follows:
C1 = XOR of bits (1, 3, 5, 7, 9, 11)
C2 = XOR of bits (2, 3, 6, 7, 10, 11)
C4 = XOR of bits (4, 5, 6, 7, 12)
C8 = XOR of bits (8, 9, 10, 11, 12)
A 0 check bit designates even parity over the checked bits, and a 1 designates odd parity. Since the bits were written with even parity, the result C = C8 C4 C2 C1 = 0000 indicates that no error has occurred. However, if an error has occurred, the 4-bit binary number formed by the check bits gives the position of the erroneous bit, provided only a single bit is in error.
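The 12-bit Hamming scheme described above can be illustrated with the following sketch; the 8-bit demonstration word is arbitrary and is not the specific example word used in the original text.

```python
# Hamming (12, 8) sketch: parity bits in positions 1, 2, 4 and 8 are computed
# over the listed positions; on read-back the check bits C8 C4 C2 C1 give the
# position of a single-bit error (0 means no error).
PARITY_POSITIONS = {1: (3, 5, 7, 9, 11),
                    2: (3, 6, 7, 10, 11),
                    4: (5, 6, 7, 12),
                    8: (9, 10, 11, 12)}

def encode(data_bits):
    """data_bits: 8 bits; returns a dict mapping positions 1..12 to bit values."""
    word, data = {}, list(data_bits)
    for pos in range(1, 13):
        if pos not in PARITY_POSITIONS:
            word[pos] = data.pop(0)            # data bits fill the non-power-of-two positions
    for p, covered in PARITY_POSITIONS.items():
        word[p] = sum(word[c] for c in covered) % 2   # even parity over covered positions
    return word

def check(word):
    """Returns the erroneous position (0 if none), assuming at most one bit error."""
    syndrome = 0
    for p, covered in PARITY_POSITIONS.items():
        if (word[p] + sum(word[c] for c in covered)) % 2:  # parity bit included in the check
            syndrome += p
    return syndrome

word = encode([1, 1, 0, 0, 0, 1, 0, 0])   # arbitrary demonstration word
assert check(word) == 0
word[10] ^= 1                              # flip bit 10 to simulate a memory error
assert check(word) == 10
```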

Conclusion
This paper presents different techniques used for the detection and correction of single-bit errors and burst errors, with improvements in the efficiency and reliability of data transmission. All the above-mentioned techniques can detect and correct errors in the data bits along with the parity bits without any extra calculation. It is observed that the Hamming method can detect and correct more errors than the other techniques mentioned above. Another correction code can be combined with the Hamming method to further increase the number of errors detected and corrected, which enhances the code rate and reduces the bit overhead.

References
1. Anlei Wang, Naima Kaabouch, FPGA Based Design of a Novel Enhanced Error Detection and Correction Technique, IEEE, Vol. 3, Issue No. 5, March 2008.
2. Behrouz A. Forouzan, Data Communications and Networking, 2nd edition, Tata McGraw Hill.
3. Fernanda Lima, Luigi Carro, Ricardo Reis, Designing Fault Tolerant Systems into SRAM-based FPGAs, Anaheim, Vol. 3, June 2003.
4. Heesung Lee, Joonkyung Sung, and Euntai Kim, Reducing Power in Error Correcting Code using Genetic Algorithm, World Academy of Science, Engineering and Technology.
5. M. Imran, Z. Al-Ars, G. N. Gaydadjiev, Improving Soft Error Correction Capability of 4-D Parity Codes, IEEE transaction, Vol. 4, Issue 2, January 2009.
6. M. Kishani, H. R. Zarandi, H. Pedram, A. Tajary, M. Raji, B. Ghavami, Horizontal-Vertical Diagonal Error Detecting and Correcting Code to Protect Against Soft Errors, Springer Science, Vol. 15, Issue No. 3-4, May 2011.
7. Narinder Pal Singh, Sukhjit Singh, Vikrant Sharma, Amandeep Sehmby, RAM Error Detection and Correction Using HVD Implementation, European Scientific Journal, Vol. 9, Issue No. 33, November 2013.
8. S. Sharma, Vijay Kumar, An HVD Based Error Detection and Correction of Soft Errors in Semiconductor Memories Used for Space Application, International Conference on Devices, Circuits and Systems (ICDCS), March 2012.
9. Shubham Fadnavis, An HVD Based Error Detection and Correction Code in HDLC Protocol Used for Communication, International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue No. 6, June 2013.
10. T. Kasarni, A. Kitai and S. Lin, On the Undetected Error Probability for Shortened Hamming Codes, IEEE Transactions on Communications, Vol. 33, Issue No. 2, 1985.
11. Y. Bentoutou, Program Memories Error Detection and Correction On-Board Earth Observation Satellites, World Academy of Science, Engineering and Technology.

Management of Convective Heat Transfer of Cold Storage with Cylindrical Pin Finned Evaporator Using Taguchi L9 OA Analysis
Dr. N. Mukhopadhyay
Assistant Professor, Department of Mechanical Engineering, Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal, India.
nm1231@gmail.com

Abstract
This study presents a design of experiments to optimize the various control factors of the evaporator space inside a cold room; in other words, the heat absorption by the evaporator is maximized so as to minimize the electrical energy needed to run the system. Cylindrical pin fins are used to maximize the heat absorption by the evaporator. A Taguchi orthogonal array has been used as the design of experiments, with three control factors at three levels each. In the evaporator space, the heat absorbed by the evaporator and fins is entirely a convective heat transfer process. The control factors are the area of the evaporator with cylindrical pin fins (A), the temperature difference of the evaporator space (dT), and the relative humidity inside the cold room (RH). The heat gain in the cold room for the different test runs has been taken as the output parameter. The objective of this work is to find the optimum setting of the control factors, or design parameters, so that the heat absorbed by the evaporator in the cold room is maximized. The Taguchi L9 orthogonal array, along with polynomial regression analysis, has been used as the optimization technique.

Keywords: Taguchi orthogonal array, convective heat transfer coefficient, cylindrical pin fin area and arrangement, regression analysis, graphical representation of control factors with heat transfer.

1. Introduction
Cold storages form the most important element for the proper storage and distribution of a wide variety of perishables such as fruits, vegetables and fish or meat products. India is the largest producer of fruits and the second largest producer of vegetables in the world. In spite of that, the per capita availability of fruits and vegetables is quite low because of post-harvest losses, which account for about 25 to 30% of production. Besides, the quality of a sizable quantity of products also deteriorates by the time it reaches the consumer. India is the second largest producer of potato after China (45,343,600 tonnes in 2015) and the largest producer of ginger (34.6% of the world total), and many other kinds of food commodities are produced in our country, so the demand for cold storage has been increasing rapidly over the past couple of decades so that food commodities can be supplied uniformly throughout the year and prevented from perishing. Besides stabilizing market prices and distributing commodities evenly on both a demand and a time basis, the cold storage industry provides other advantages and benefits to both farmers and consumers: the farmers get the opportunity to earn a good return on their hard work, and on the consumer side the perishable commodities are available with lower price fluctuation. Very few theoretical and experimental studies on the performance enhancement of cold storage have been reported in the literature. The energy crisis is one of the most important problems the world faces nowadays. With the increasing cost of electrical energy, the operating cost of cold storage is increasing, which drives up the cost price of the commodities kept there. So it is very important to make cold storage energy efficient, or in other words to reduce its energy consumption; the storage cost will then eventually come down.
In the case of conduction we have to minimize the leakage of heat through the walls, but in convection the maximum heat should be absorbed by the refrigerant to create cooling uniformity throughout the evaporator space. If the desired heat is not absorbed by the refrigerant in the tube or pipe, the temperature of the refrigerated space will increase, which not only hampers the quality of the stored product but also reduces the overall performance of the plant. That is why a mathematical model is absolutely necessary to predict the performance.

In this paper we propose a theoretical convective heat transfer model of a cold storage developed using the Taguchi L9 orthogonal array. The area of the evaporator with fins (A), the temperature difference (dT) and the relative humidity (RH) are the basic variables, and three levels of each are taken in the model development. A graphical interpretation of the model justifies the reality.

2. Model development background
2.1 Range and parameter selection
The length, breadth and height of each chamber of the cold storage are 87.5 m, 34.15 m and 16.77 m respectively. Three values of the area of the evaporator with fins (A) are considered, the first being 8.253 m2. The three values of the temperature difference (dT) of the evaporator space are 2, 5 and 8 degrees centigrade respectively. The three values of relative humidity (RH) of the evaporative space are 0.85, 0.90 and 0.95 respectively.

Table no. 1: Control factors with their ranges (Notation, Factor, Unit): A, area of the evaporator with fin, m2; dT, temperature difference, degrees C; RH, relative humidity, %.
Table no. 2: Taguchi's L9 orthogonal array.

In this study, the Mohitnagar cold storage (Jalpaiguri) and the Teesta cold storage have been taken as models of observation.

3. Model development proposed technique
3.1 Regression analysis
The analytical study has been carried out using Taguchi's L9 orthogonal array experimental design, which consists of 9 combinations of area, relative humidity and temperature difference; it considers three process parameters varied over three discrete levels (Table no. 3). The polynomial multiple regression equation is

Y = β0 + β1 X1 + β2 X2 + β3 X3 + β11 X1^2 + β22 X2^2 + β33 X3^2 + β12 X1 X2 + β13 X1 X3 + β23 X2 X3

The above equation is a second-order polynomial equation in 3 variables, where the β terms are constants, X1, X2 and X3 are the linear terms, X1X2, X1X3 and X2X3 are the interaction terms between the factors, and X1^2, X2^2 and X3^2 are the square terms. Q (heat due to convection) is the response variable and A, dT and RH are the predictor variables. After replacing the real problem variables, the polynomial regression equation becomes

Q (heat due to convection) = β0 + β1(A) + β2(dT) + β3(RH) + β11(A)(A) + β22(dT)(dT) + β33(RH)(RH) + β12(A)(dT) + β13(A)(RH) + β23(dT)(RH)   ...(1)

To solve this equation the following matrix method is used:
[Q] = [X][β]   ...(2)
[β] = [X]^-1 [Q]   ...(3)
where [β] is the coefficient matrix, [Q] is the response variable matrix, and [X]^-1 is the inverse of the predictor (input) variable matrix.

In this problem there are 3 independent variables, each with 3 levels, and hence from the Taguchi orthogonal array (OA) tables the L9 OA is selected. In equation (2), X is a 10x10 matrix, β is a 10x1 matrix and Q is a 10x1 matrix. Using the L9 orthogonal array and its interaction terms we have to find the β values. There are nine test runs, so there will be nine values of Q and nine equations. The values of A, dT, RH, A^2, dT^2, RH^2, A*dT, A*RH and dT*RH can be found from the following table:

Table no. 4: Observation table with square and interaction terms.

3.2 Heat energy calculation
In this study only the heat transfer from the evaporating space to the refrigerant (which is in the tube or pipe) is considered. The heat transferred from the evaporating space to the refrigerant is calculated in terms of the area of the evaporator with fins (A), the temperature difference (dT) and the relative humidity (RH). Only the convective heat transfer effect is considered in this study. The basic heat transfer balance is Q_T = Q_conv + Q_condensation, where Q_conv is the heat transfer due to convection, Q_condensation is the heat transfer due to condensation, and Q_T is the total heat transferred to (absorbed by) the refrigerant.

Q_T = A h_c dT + A h_m (RH) h_fg
Q_T = [A h_c dT] + [(h_c/1.005) A (RH) h_fg]
Q_T = A h_c [dT + (RH h_fg)/1.005]

The heat transfer equation in terms of the area of the evaporator with fins (A), the temperature difference (dT) and the relative humidity (RH) is therefore

Q_T = A h_c (dT + RH h_fg/1.005)   ...(4)  [9]

with h_c = 2.35 (calculated for the conditions of the present study). The final heat transfer equation becomes

Q_T = 2.35 * A * (dT + RH h_fg/1.005)   ...(6)

The evaporator area is taken as A = Ab + Af, where
Ab = π r^2
Af = [2π(D^2/4) + (π D) H] * n * N
r = radius of bare tube (m)
L = length of bare tube = 1 m
D = diameter of cylindrical pin fin = 0.005 m
H = height of cylindrical pin fin = 0.02 m
n = number of bare tubes = 1
N = number of cylindrical pin fins
The pin fins are arranged in a chain-ordered pattern with D = fin diameter, S = longitudinal fin spacing (S = 3D), Sb = breadthwise fin spacing (Sb = 2.5D) and m = margin (m = 1.5D).

Fig 1: Configuration of cylindrical pin fin

Now, from equation (6) we obtain the Q values by substituting the values of the area of the evaporator with fins (A) from the area equation, the relative humidity (RH) and the temperature difference (dT). It is assumed that, due to the cylindrical fins, the total heat transfer area increases in a simple additive manner.

Table no. 5: Response variable matrix

3.3 Cylindrical pin fin
The configuration of the pin is shown in Fig 1. The cross-section is a 5 mm circle; this diameter was considered as a reference length scale. If we consider H as the height of the cylinder, the surface area can easily be calculated from the following formulas:
Surface area = area of top and bottom + area of the side
Surface area = 2(area of top) + (perimeter of top) * height
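As a rough illustration of equations (1) to (3) and (6), the sketch below generates the nine L9 test runs, computes Q from equation (6), and estimates the polynomial coefficients with a least-squares fit (numpy's lstsq) rather than the explicit matrix inversion used in the paper. The latent heat value h_fg and the second and third area levels are assumptions made purely for illustration, since they are not recoverable from this text.

```python
# Sketch of the L9 design, equation (6) response values, and a least-squares
# estimate of the ten polynomial coefficients of equation (1).
import numpy as np

h_c, h_fg = 2.35, 2500.0                 # h_fg (kJ/kg) is an assumed typical value
A_levels  = [8.253, 9.0, 10.0]           # placeholder areas (m^2) except the first
dT_levels = [2.0, 5.0, 8.0]              # deg C
RH_levels = [0.85, 0.90, 0.95]

# Standard Taguchi L9 orthogonal array for three factors (levels indexed 0..2).
L9 = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1)]

rows, Q = [], []
for ia, it, ir in L9:
    A, dT, RH = A_levels[ia], dT_levels[it], RH_levels[ir]
    q = h_c * A * (dT + RH * h_fg / 1.005)        # equation (6) with h_c substituted
    rows.append([1, A, dT, RH, A*A, dT*dT, RH*RH, A*dT, A*RH, dT*RH])
    Q.append(q)

beta, *_ = np.linalg.lstsq(np.array(rows), np.array(Q), rcond=None)
for name, b in zip(["b0", "A", "dT", "RH", "A^2", "dT^2", "RH^2", "A*dT", "A*RH", "dT*RH"], beta):
    print(f"{name:>6s}: {b: .4f}")
```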

Now, using equation (3), we obtain the coefficient matrix [β]; MATLAB software is used to compute it. After putting the values of the regression coefficients into equation (1), the final regression equation becomes

Q (heat due to convection) = β0 + β1(A) + β2(dT) + β3(RH) + β11(A)(A) + β22(dT)(dT) + β33(RH)(RH) + β12(A)(dT) + β13(A)(RH) + β23(dT)(RH), with the numerical coefficient values obtained from MATLAB.

This is the proposed regression equation. It establishes the relationship between the response variable Q and the predictor variables, i.e. the control variables A, dT and RH. With the help of the above equation the heat transfer from the evaporator space to the evaporator coil can easily be computed. With the help of the multiple regression equation a computer program has been generated to check the effect of variations of the control parameters on the output variable, and with the data sets generated by the program various graphs are drawn.

4. Results and discussions
The regression equation developed is simulated on a computer to find the effects of the control parameters on the heat transfer rate within the considered ranges. This is done by varying one parameter at a time within its range while keeping the other parameters constant at their mid-level values.

Effect of the evaporator area on the heat transfer rate: the values of dT and RH remain constant. The value of A is varied within its range, keeping dT = 5 and RH = 0.90 and incrementing A by 0.5; the corresponding values of Q are plotted. Figure 2 shows that the heat absorption increases as the evaporator area increases.

Fig 2: Variation of heat transfer with area of the evaporator.

Effect of the temperature difference on the heat transfer rate: the values of A and RH remain constant. The value of dT is varied within the range 2 to 8, starting with 2.0, keeping A fixed and RH = 0.90 and incrementing dT by 0.5; the corresponding values of Q are plotted. Figure 3 shows that the heat absorption increases as the temperature difference increases, and that it is affected more at lower temperature differences than at higher ones.

Fig 3: Variation of heat transfer with temperature difference.

Effect of the relative humidity on the heat transfer rate: the values of A and dT remain constant. The value of RH is varied within the range 0.85 to 0.95, starting with 0.85, keeping A = 8.253 and dT = 5 and incrementing RH by 0.005; the corresponding values of Q are plotted in Fig 4.

Fig 4: Variation of heat transfer with relative humidity.

5. Conclusions
The present study discusses the application of the Taguchi methodology to investigate the effect of the control parameters on the heat gain (Q) in the cold room, and also proposes a mathematical model with the help of regression analysis. From the analysis of the results obtained, the following conclusions can be drawn:
1. From the graphical analysis, the optimum heat absorption by the evaporator occurs when the area is m2, the temperature difference (dT) is 5 degrees C and the relative humidity (RH) is . This optimum has been proposed within the ranges [A (8.253, , ), dT (2, 5, 8), RH (0.85, 0.90, 0.95)].
The proposed regression equation establishes the relationship between the response variable heat transfer Q and the predictor variables i.e. control variables A, dt, and RH. 3. Heat absorption capability of pin fin is better compared to other fins. So it is necessary use in cold storage. References 1. Hamid Nabati Optimal pin fin heat exchanger surface, Mälardalen University Press Licentiate Theses No. 88, B.Kundu, An analytica study of the effect of dehumidification of air on the performance and optimization of straight tapered fins, international communications in heat and mass transfer,2002

3. A. Patel, R. I. Patel, Optimization of different parameters of cold storage for energy conservation, International Journal of Modern Engineering Research, Vol. 2, Issue 3, May-June 2012.
4. Michael Empiak, Brent Junge, Enhanced Heat Exchanger with Offset Spine Fin Design, International Refrigeration and Air Conditioning Conference, General Electric, United States of America.
5. Dr. N. Mukhopadhyay, Comparative Heat Conduction Model of a Cold Storage with PUF & EPS Insulation Using Taguchi Methodology, IJERA, Vol. 5, Issue 5 (Part 5), May 2015.
6. Dr. N. Mukhopadhyay, A Theoretical Comparative Study of Heat Load Distribution Model of a Cold Storage, International Journal of Scientific & Engineering Research, Volume 6, Issue 2, February 2015.
7. Dr. N. Mukhopadhyay, Optimization of Different Control Parameters of a Cold Storage using Taguchi Methodology, AMSE Journals 2014, Series: Modelling D, Vol. 36, No. 1, pp. 1-9; submitted July 2014, revised Jan. 12, 2015, accepted Feb. 20, 2015.
8. Dr. N. Mukhopadhyay, Theoretical Convective Heat Transfer Model Development of Cold Storage Using Taguchi Analysis, IJERA, Vol. 5, Issue 1 (Part 6), January 2015.
9. Dr. N. Mukhopadhyay, Theoretical Convective Heat Transfer Model Development of Cold Storage Using Taguchi Analysis, Vol. 5, Issue 1 (Part 6), January 2015.

Mobile Selection Using Vikor Method

Arnab Roy 1, Sk Raihanur Rahman 1, Ranajit Midya 2, Subhabrata Mondal 3, Dr. Anupam Haldar 2*
1 Student, Department of Mechanical Engineering
2 Faculty, Department of Mechanical Engineering
3 Faculty, Department of MCA
Netaji Subhash Engineering College, Kolkata
*Corresponding author: anujuster@gmail.com

Abstract
An effective Multi-Criteria Decision Making (MCDM) approach has been introduced for 4G mobile selection. Mobile selection is a multi-criteria decision making problem which is influenced by different performance criteria. MCDM methods create a ranking (ascending/descending) of the candidate alternatives, and thus decision making based on critical analysis becomes much easier. VIKOR (Vlse Kriterijumska Optimizacija I Kompromisno Resenje) is the branch of MCDM methods used in the present study. This study presents the selection of an appropriate 4G mobile set, where four alternatives are evaluated on the basis of four properties to identify the best mobile set. To this end, the study applies the VIKOR method, adapted from MCDM, for exploiting quantitative actual performance estimation.

Keywords: Mobile Selection Problem, VIKOR Method, Multicriteria Decision Making.

1. Introduction
Multi-criteria decision-making (MCDM) is a sub-discipline of operations research that explicitly assesses multiple conflicting criteria in decision making analysis (either in our daily life or in occupational settings). Conflicting criteria are typical in assessing options: cost or price is usually one of the main criteria, while measures of quality form another attribute that is often in conflict with cost. For purchasing a mobile set, a) cost, b) RAM, c) ROM and d) battery capacity may be some of the main criteria, and it is unusual for the cheapest mobile to be the most effective, beneficial and safest one. In portfolio management, we are interested in high returns while simultaneously reducing risks.
But stocks with the potential to bring high returns typically also carry a high risk of losing money. In the service industries, the fundamental conflicting attributes are customer satisfaction and the cost of providing the service. The MCDM problem is stated as follows: determine the best (compromise) solution in the multi-criteria sense from the set of J feasible alternatives A1, A2, ..., AJ, evaluated according to a set of n criterion functions. The input data are the elements f_ij of the performance (decision) matrix, where f_ij is the value of the i-th criterion function for the alternative Aj. An MCDM problem can be represented by a decision matrix as follows:
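The decision-matrix figure itself does not reproduce here; written out in the standard tableau form implied by the sentence above (criterion functions as rows, alternatives as columns), it would look roughly like this:

```latex
% Performance (decision) matrix: f_{ij} is the value of the i-th criterion
% function for alternative A_j.
\[
\begin{array}{c|cccc}
       & A_1    & A_2    & \cdots & A_J    \\ \hline
f_1    & f_{11} & f_{12} & \cdots & f_{1J} \\
f_2    & f_{21} & f_{22} & \cdots & f_{2J} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
f_n    & f_{n1} & f_{n2} & \cdots & f_{nJ}
\end{array}
\]
```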

Here, A_i represents the i-th alternative, i = 1, 2, ..., m; C_j represents the j-th criterion, j = 1, 2, ..., n; and x_ij is the individual performance of an alternative. The procedure for evaluating the best solution to an MCDM problem includes computing the utilities of the alternatives and ranking them; the alternative with the highest utility is considered to be the optimal solution.

3. Methodology using VIKOR Method
The VIKOR method is a multicriteria decision making method. It was developed by Serafim Opricovic for solving decision making problems with conflicting and non-commensurable attributes, assuming that compromise is acceptable for conflict resolution, that the decision maker wants a solution that is the closest to the ideal, and that the alternatives are assessed according to all the established criteria. VIKOR ranks the alternatives and determines the solution, named the compromise solution, that is the closest to the ideal.

In 1973 the idea of the compromise solution was introduced into MCDM by Po-Lung Yu and Milan Zeleny. Serafim Opricovic developed the basic ideas of VIKOR in his Ph.D. thesis in 1979, and an application followed soon after. The name VIKOR appeared in 1990, from the Serbian VIseKriterijumska Optimizacija I Kompromisno Resenje, meaning Multicriteria Optimization and Compromise Solution, with the pronunciation "vikor". Real applications of the method were presented in 1998, and the paper published in 2004 contributed to the international recognition of the VIKOR method (most cited paper in the field of Economics, Science Watch, Apr. 2009).

An individual wants to purchase a 4G mobile set considering a) cost, b) RAM, c) ROM and d) battery capacity as selection criteria. Though there are several manufacturers of 4G sets, only four models, a) MOTO G4 PLUS, b) REDMI NOTE 3, c) LE 2 and d) REDMI 3S PRIME, are identified according to their market status. The purchaser wishes to select a suitable mobile set considering these selection criteria. In this work the VIKOR method is used: the relevant steps of the VIKOR algorithm were executed on the available data, and finally the ranking of the alternatives was assessed.

Step 1: The decision matrix is formed on the basis of the raw data.
Table 1: Attributes for mobile selection criteria

Step 2: Representation of the normalized decision matrix. The normalized decision matrix can be expressed as F = [f_ij]_(m×n). Here, A_i represents the i-th alternative, i = 1, 2, ..., m; j = 1, 2, ..., n; and x_ij is the individual performance of an alternative.
Table 2: Normalized quality loss estimates (NQL)

Table 3: Utility measures of individual criteria attributes
Here, according to Table 3: S_31 = 0.2 × {( ) / ( )} =

Step 6: Calculation of the utility and regret measures for each response in each experimental run using equations (5) and (6) respectively.
Table 4: Utility measures (S_i) and regret measures (R_i) of the alternatives
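Taken together, the steps of this procedure (including the VIKOR index Q_i defined in Step 7 below) can be sketched in a few lines. The spec values and the criterion weights in this snippet are illustrative placeholders rather than the study's Table 1 data, so only the mechanics, not the exact numbers, correspond to the paper.

```python
import numpy as np

# Illustrative VIKOR sketch: normalised distances from the ideal, utility S,
# regret R and the compromise index Q.  Data and weights are placeholders.
alternatives = ["MOTO G4 PLUS", "REDMI NOTE 3", "LE 2", "REDMI 3S PRIME"]
criteria     = ["Cost", "RAM", "ROM", "Battery"]
beneficial   = np.array([False, True, True, True])     # cost is to be minimised
weights      = np.array([0.20, 0.35, 0.25, 0.20])      # assumed criterion weights
X = np.array([                                         # assumed spec values
    [14999, 4, 64, 3000],
    [11999, 3, 32, 4050],
    [11999, 3, 32, 3000],
    [ 8999, 3, 32, 4100],
], dtype=float)

f_best  = np.where(beneficial, X.max(axis=0), X.min(axis=0))   # ideal value per criterion
f_worst = np.where(beneficial, X.min(axis=0), X.max(axis=0))   # anti-ideal value

d = weights * (f_best - X) / (f_best - f_worst)   # weighted normalised distances
S = d.sum(axis=1)                                 # utility measure S_i
R = d.max(axis=1)                                 # regret measure R_i

v = 0.5                                           # weight of the majority-of-criteria strategy
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

for name, q in sorted(zip(alternatives, Q), key=lambda t: t[1]):
    print(f"{name:15s} Q = {q:.3f}")              # a smaller Q means a better compromise
```

With these placeholder inputs the smallest Q happens to be obtained for MOTO G4 PLUS, which is consistent with the ranking reported in the conclusion, but the actual study values would have to be substituted to reproduce Table 5.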

Here, S_i [MOTO G4 PLUS] = ( ) =

Step 7: Computation of the VIKOR index. The VIKOR index can be expressed as follows:

Q_i = v·[(S_i - S*)/(S^- - S*)] + (1 - v)·[(R_i - R*)/(R^- - R*)]    ...(7)

Substituting S_i and R_i into equation (7) yields the VIKOR index of the i-th experimental run. A smaller VIKOR index indicates better multi-response performance.
Table 5: VIKOR index of the individual alternatives

4. Discussion & Conclusion
In the present study the VIKOR method has been used to solve an MCDM problem through a case study of mobile selection. The work presents the selection of an appropriate 4G mobile set, where four alternatives are evaluated on the basis of four properties to identify the best mobile set. MOTO G4 PLUS is the best alternative. The study exhibits the effectiveness of the said MCDM technique in solving such a mobile selection problem. The relevant steps of the VIKOR algorithm were executed on the available data and finally the ranking of the alternatives was performed.

References
1. Chang C.L., A modified VIKOR method for multiple criteria analysis, Environmental Monitoring and Assessment, 2009.
2. Chatterjee P., Athawale V.M., Chakraborty S., Selection of materials using compromise ranking and outranking methods, Materials and Design, 2009, 30.
3. Chiu W.Y., Tzeng G.H. and Li H.L., A new hybrid MCDM model combining DANP with VIKOR to improve e-store business, Knowledge-Based Systems, 37 (2013).
4. Datta S., Mahapatra S.S., Banerjee S. and Bandyopadhyay A., Comparative study on application of utility concept and VIKOR method for vendor selection, AIMS International Conference on Value-based Management, organized by Indus Business Academy India, International Forum of Management Scholars, AIMS International, August 11-13, 2010, Dev Sanskriti Vishwavidyalaya, Haridwar, India.
5. Fallahpour A. R. and Moghassem A. R., Evaluating Applicability of VIKOR Method of Multi-Criteria Decision Making for Parameters Selection Problem in Rotor Spinning, Fibers and Polymers, 2012, Vol. 13, No. 6.
6. Hsu C.H., Wang F.K. and Tzeng G.H., The best vendor selection for conducting the recycled material based on a hybrid MCDM model combining DANP with VIKOR, Conservation and Recycling (Elsevier), 66 (2012).
7. Huang J.J., Tzeng G.H., Liu H.H., A revised VIKOR model for multiple criteria decision making: the perspective of regret theory, Cutting-Edge Research Topics on Multiple Criteria Decision Making, 2009.
8. Hwang C.L., Yoon K., Multiple Attribute Decision Making: Methods and Applications, Berlin: Springer-Verlag.
9. Jahan A., Mustapha F., Md Ismail Y., Sapuan S.M., Bahraminasab M., A comprehensive VIKOR method for material selection, Materials and Design, 32 (2011).
10. Kumar M., Vrat P. and Shankar R. (2004), A fuzzy goal programming approach for vendor selection problem in a supply chain, Computers and Industrial Engineering, 46.
11. Lee-Ing Tong, Chi-Chan Chen, Chung-Ho Wang, Optimization of multi-response processes using the VIKOR method, International Journal of Advanced Manufacturing Technology (2007), 31.
12. Marjan Bahraminasab, Ali Jahan, Material selection for femoral component of total knee replacement using comprehensive VIKOR, Materials and Design, 32 (2011).
13. Opricovic S. and Tzeng G.H., Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS, European Journal of Operational Research, 156 (2004).
14. OuYang Y.P., Shieh H.M. and Tzeng G.H., A VIKOR technique based on DEMATEL and ANP for information security risk control assessment, Information Sciences, in press.
15. Opricovic S., Tzeng G.H., Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS, European Journal of Operational Research, 2004, 156.
16. Pramanik D., Haldar A., Mondal S., Naskar S.K. and Ray, Resilient Supplier Selection using AHP-TOPSIS-QFD under Fuzzy Environment, International Journal of Management Science and Engineering Management (Taylor & Francis), 45-54, 12(1) (published online: 05 Jan 2016).
17. Pramanik D., Haldar A. (2014), Design of Fuzzy Decision Support System (FDSS) in Technical Employee Recruitment, International Journal of Emerging Technology and Advanced Engineering, 4(2).

Automation, Security and Surveillance for a Smart City

Subhadeep Datta
Dept. of Mechanical Engineering, Jalpaiguri Govt. Engineering College, Jalpaiguri, India
sahebdatta1996@gmail.com

Abstract
The Internet of Things (IoT) is realised when the Internet and networks of sensors extend to areas such as home and industrial automation, traffic control, synchronization of emergency services and surveillance over Wi-Fi. The concept of the SMART CITY is an emerging field in current research, since people are becoming more time conscious and, with the rising population, it is becoming increasingly difficult to control things manually. This paper focuses specifically on the utilization of city/campus-wide Wi-Fi, the existing telecommunication service, various sensors and the widely used Android phone to develop a SMART CITY in which every corner of the city is connected to a central control unit over Wi-Fi. The paper also describes a working prototype of the proposed vision of the SMART CITY.

Keywords: smart city, automation, ESP, security, surveillance, arduino, DTMF.

1. Introduction
The internet of things (IoT) is the internetworking of physical devices, vehicles, buildings and other items embedded with electronics, software, sensors, actuators, and network connectivity that enable these objects to collect and exchange data [1]. The concept of IoT is not very new; it first became popular well over a decade ago and has since undergone a massive transformation. As of 2016, the vision of the internet of things has evolved due to a convergence of multiple technologies, including ubiquitous wireless communication, real-time analytics, machine learning, commodity sensors, and embedded systems [2]. The growing adoption of IoT has led to developments in communication systems, traffic control, school and library management, waste management, data collection from citizens, etc. These, when combined, give the concept of a Smart City and Home Automation. As the idea of the Smart City develops, it is also becoming increasingly secure. Various household devices are being connected and controlled over the internet [3]. Apart from using the internet as a platform, interactive home automation systems have been developed using Bluetooth communication, remotely controlled from Android devices [4]. Other platforms include ZigBee communication and Ethernet, and transceiver modules are also sometimes used in place of the internet where local communication is the main concern. However, these diversified solutions are not synchronized under one head to work as a single unit. In addition to the technical difficulties, the adoption of IoT is also hindered by the lack of a clear and widely accepted business model that can attract investments to promote the deployment of these technologies [5, 6]. The existing solutions, in spite of being advanced, cannot be implemented to build a Smart City since they are specific, i.e., they are implemented part-wise. To design a Smart City, every part of the city should be synchronized with every other and work as a unit. In this paper, I propose a solution in which everyday objects and services are connected to micro-controllers and a user-friendly platform is provided for users to access them.
It also maintains a database in the security control center from which information can be accessed as and when necessary. This entire task is performed over the Internet, wherein all the apartments and service centers of the city are synchronized and the users get a platform from which they can access them. A layout of the proposed city is illustrated in Fig 1. The proposed Smart City concept consists of the following features:
- Apartments equipped with various sensors
- Control of home appliances over the phone, using DTMF [7]
- Password-protected, RFID [8] based entry control to the apartment
- Emergency services and apartments synchronized using ESP [9] modules
- A database of activities recorded for security purposes
- Updates on the user's phone or laptop (as opted)
- Automation of daily activities
- Live surveillance of the home without loss of data
- Servicing information for appliances sent automatically to the respective service provider

The entire city is connected over Wi-Fi. Apartments and other service centers communicate using ESPs. The functions of the Smart City have been categorized into three interlinked parts: automation of the apartment, synchronization of city services, and surveillance of the apartments and the city.

2. Automation of Apartment
Home automation is a very vast field in which a great deal of development has taken place in the past few years, and it is improving day by day. Home automation can be defined as accessing or controlling many of our home appliances, security, climate, and video monitoring from a remote or centralized location [10]. In this paper, I propose some basic automation that can be done at a very low cost. The whole automation is divided into two parts: controlled by the micro-processor (fully autonomous) and controlled by the owner (semi-autonomous).

Entry to the apartment requires authorization. This is done by installing a Radio Frequency Identification (RFID) based entry system, which matches the RFID card with the predefined one allotted by the security control center. A keypad is also installed at the entrance. The user has to first produce the RFID card and then press the pass code to enter the apartment.

The semi-autonomous part includes the control of the appliances by the owner using a mobile phone. The owner can control his appliances from any corner of the world via his mobile phone. This is done using a DTMF (Dual Tone Multi-Frequency) module, which is attached to a mobile phone placed back at the apartment. The module is also attached to a micro-processor which carries out the required operations. This allows the owner to access his appliances from almost anywhere he can make a phone call.

A major disadvantage of this system is that anyone can gain access to the appliances if he has the phone number of the phone placed at the apartment. For this reason, the system is password protected. The owner has to first dial the phone number and then the password to unlock the system. If someone fails to unlock the system three times in a row, that number gets blocked for 24 hours and a notification is sent to the registered mobile number. Additional voice assistance is also provided to the owner for easy access if he opts for it. For instance, suppose the owner wants to switch on his AC 5 minutes before he enters his apartment so that he can enjoy the cool air: he dials the number, then the password, and presses the required key for switching on the AC.

The fully autonomous part includes some basic home automation such as:
- Automated switching of lights and fans according to light intensity or/and climatic conditions and the presence of occupants respectively
- Curtain raiser, according to the intensity of light (LDR)
- Watering the garden, according to the moisture content of the soil (SEN sensor)
- Switching of the pump, according to the water level of the overhead tank (IR sensor)
- Garage door open/close, according to the distance of the owner's car (ultrasonic sensor)
- Switching of lights or security cameras, according to motion detected inside the house (PIR sensor)

The automation of each apartment operates in two modes: the Alert mode and the Normal mode.

The Alert mode: It is activated automatically or can be activated by the owner himself. This mode is activated automatically when no one is at home. When the PIR sensor detects the absence of occupants of the house, it locks the system and triggers the Alert mode. The PIR sensors, security cameras, and door and window contact sensors remain active at all times. Whenever motion is detected by the PIR sensor or the door and window contact sensors detect unusual behaviour, the surveillance cameras are automatically turned in that direction. A notification is sent to the nearest police station and to the owner of the house.

The Normal mode: This mode is activated automatically or can be activated by the owner. It is activated when the house has occupants. Say, for instance, an apartment is set to Alert mode; only if it detects the correct RFID tag and pass-code, and then detects the presence of an occupant, does the Normal mode get activated. Whenever the window contact sensors detect unusual behaviour, the surveillance cameras are automatically turned in that direction. A notification is sent to the owner of the house, who accordingly sends a notification to the police station if required.

3. Synchronization of City Services
A smart city is one where every action is automated and synchronized with every other. This not only reduces time delay but also reduces manpower and prevents long waits. In such a city, emergency situations, criminal activities, etc. are detected by various sensors and immediate action is taken automatically. A database of every activity is maintained for security purposes. With the advancement of technology, the security level of apartments is also increasing, and implementation of the cloud is now a common practice to take the security level to the next level.
This paper proposes the synchronization of city hospitals, fire stations, police stations and other city services with residence complexes and also with each other. Any emergency situation is conveyed via Wi-Fi to the respective place. These operations are carried out using various sensors and actuators. The residence complexes are equipped with sensors which are connected to a micro-controller; these sensors inform the micro-controller about the surroundings. The sensor data is processed by the micro-controller and action is taken accordingly. The micro-controller of each apartment is connected to the main server via an ESP module. Every piece of data sent by the module is recorded in a database according to its source and type. A database is maintained in which details of all the apartments, their GPS locations, owners' details, residents' details, emergency contact numbers and various other details are saved. In the case of any emergency service, the data of the respective house is accessed and displayed accordingly. Some of the sensors are the fire sensor, smoke detector, PIR sensor, GPS module, gas detector, emergency button, and door and window contact sensors. Some of the city services are as follows:

3.1 The control center
It is the central controlling office and works as the database manager. It registers any new resident of the city into the database and allots an RFID card and a pass-code for his apartment. His full bio-data along with the GPS position of his house is entered in the database. The owner can change the given PIN number as and when required. Each and every apartment and service center is registered by the control center, and only the control center can make changes in the database. The registration process keeps a check on anti-social activities. It also keeps a note of employment, the number of individuals, the number of children and many other records as opted by the user.

3.2 The police station
The police station takes care of the criminal activities happening in the city. It is connected to the Wi-Fi via an ESP. The ESP is connected to a micro-processor which processes the information received by the ESP and acts accordingly. At the time of an emergency, information from the apartment is first sent to the nearest police station. The micro-controller processes the information received and displays the owner's name, GPS location and contact number on a screen. It also allots a vehicle to reach the spot. The vehicle is GPS enabled and synchronized with Google Maps, which shows its position and destination on a monitor in the car. If by any chance the allotted vehicle faces an obstruction, a notification is sent back to the station and another vehicle is allotted immediately.

3.3 The fire brigade
The heat and fire sensors detect the presence of fire or smoke. In case of an emergency situation, a notification is sent to the fire brigade. The whole process is the same as in the case of the police station. The fire brigade can also be informed automatically if required.

3.4 The emergency switch
Every apartment is equipped with an emergency switch placed at a reachable location inside the house. This switch is synchronized with the city hospitals. If an elderly or needy person who is alone at home needs emergency help and the button is pressed, the information is conveyed to the nearest hospital, which sends an ambulance at the earliest. Its working is similar to that of the previous two.
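A rough sketch of the dispatch flow described for the police station (and, analogously, the fire brigade and the hospital) is given below. The record fields, names and vehicle IDs are placeholders for illustration, not part of the prototype itself.

```python
# Hedged sketch: an incoming alert carries an apartment ID, the control-centre
# database supplies the owner's details and GPS location, and the first free
# vehicle is allotted.  All data below are placeholders.

apartments = {
    "APT-017": {"owner": "A. Sharma", "gps": (26.54, 88.72), "contact": "+91-00000-00000"},
}
vehicles = [{"id": "PCR-1", "free": True}, {"id": "PCR-2", "free": True}]

def handle_alert(apartment_id, alert_type):
    record = apartments.get(apartment_id)
    if record is None:
        return None                              # unknown source: refer to the control centre
    # Display what the station screen is described as showing
    print(f"{alert_type} alert from {apartment_id}: owner {record['owner']}, "
          f"GPS {record['gps']}, contact {record['contact']}")
    for v in vehicles:                           # allot the first free vehicle
        if v["free"]:
            v["free"] = False
            print(f"Vehicle {v['id']} allotted to {record['gps']}")
            return v["id"]
    print("No vehicle free; alert forwarded to the next nearest station")
    return None

handle_alert("APT-017", "Intrusion")
```

If the allotted vehicle later reports an obstruction, the same allotment routine can simply be called again to dispatch the next free vehicle.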
3.5 Company building complex
Every apartment and building has certain electronic items, and these items require servicing from time to time. This purpose is served by installing sensors in the appliances which, from time to time, upload the health status of the appliances to the company's database. The appliances are synchronized with the Wi-Fi. The companies, according to the data received, send servicing personnel to the respective apartment. This prevents delayed servicing and also increases the lifespan of the appliances. These measures reduce the time delay between an incident and the rescue operation. They also reduce the workload of the respective staff, since the whole process is automated, and they ensure the safety of the residents.
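The periodic health-status upload just described could look roughly like the sketch below; the endpoint URL and payload fields are assumptions made for illustration, not an interface defined by the prototype.

```python
import json
import urllib.request

# Hypothetical appliance-side push of a health report to the service company's
# database over the common Wi-Fi; the URL and field names are placeholders.
def report_health(appliance_id, apartment_id, hours_run, fault_code=None,
                  url="http://company.example/api/appliance-status"):
    payload = {
        "appliance_id": appliance_id,
        "apartment_id": apartment_id,
        "hours_run": hours_run,
        "fault_code": fault_code,          # None means no fault detected
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status                 # the company side decides whether to send a technician

# Example (not executed here): an air-conditioner reporting 2500 running hours
# report_health("AC-42", "APT-017", 2500)
```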

The detailed flowchart of the working of the various units of the proposed Smart City is illustrated in Fig 2.

Fig 2: Flow chart of the system

4. Surveillance of Apartment and City
Security is incomplete without surveillance. In addition to the increasing awareness of efficient home surveillance systems, the implementation of a real-time alert system is important for users. Usually, surveillance makes use of CCTV cameras and a security control room to monitor them. In this paper, I have made use of a smart phone in place of CCTV cameras. Affording a CCTV camera for surveillance of a house is sometimes not possible; hence, a smart phone is mounted instead of a CCTV camera. This phone is connected to the common Wi-Fi. The owner has to enter the IP address of the phone to access the video footage, and he can access the footage from anywhere as long as he can connect to the common Wi-Fi. This requires no active internet connection, and it works only if the device placed at the home and the user's device are in the same Wi-Fi zone. The video can be viewed on smart phones as well as laptops and tablets. Hence there is no further need for CCTVs and LCDs to keep track of our home or office.

For the other parts of the city, such as streets and malls, CCTV cameras are mounted and their video footage is kept under live surveillance. The video footage is fed to a microprocessor which processes the live video. Initially the microprocessors are programmed with some of the common crime activities. The microprocessor constantly compares the live video footage with the pre-programmed videos; if the match crosses a specified percentage, it reports the activity to the nearest police station. This minimizes the delay between the crime and the rescue operation.

5. Conclusions
Today, most of the leading automation companies are focusing on home and industry automation and security aspects. The automation industries are developing day by day and the sensors used are also improving. The proposed solution covers not only houses but the whole town, city or campus. As of today, no such solution exists in which every apartment and service of the town or campus remains connected over the campus-wide Wi-Fi and constantly communicates with each other. The proposed solution is a step towards Digital India - a Smart City.

References
1. Internet of Things Global Standards Initiative. ITU. Retrieved 26 June.
2. I. Wigmore: Internet of Things (IoT). TechTarget, June.
3. B. VinayagaSundaram, Ramnath M., Prasanth M. and VarshaSundaram J., Encryption and hash based security in Internet of Things, Signal Processing, Communication and Networking (ICSCN), 3rd International Conference on, Chennai, 2015.
4. D. Sunehra and M. Yeena, Implementation of interactive home automation systems based on and Bluetooth technologies, 2015 International Conference on Information Processing (ICIP), Pune, 2015.
5. Andrea Zanella, Nicola Bui, Angelo Castellani, Lorenzo Vangelista, and Michele Zorzi, Internet of Things for Smart Cities, IEEE Internet of Things Journal, Vol. 1, No. 1, February.
6. A. Laya, V. I. Bratu, and J. Markendahl, Who is investing in machine-to-machine communications?, in Proc. 24th Eur. Reg. ITS Conf., Florence, Italy, Oct. 2013.
7. Dual Tone Multi-Frequency (Touch-Tone) Reference. Available: signaling/dtmf.html
8. RFID Reader - RS-232. Available:
9. Technical Reference. Available: en.pdf
10. R. Nicole, Title of paper with only first word capitalized, J. Name Stand. Abbrev., in press.

A Solar Refrigeration System to reduce cooling water consumption in a thermal power plant

Prof. Asim Mahapatra 1, Bishal Dey 2
1 Asst. Professor, Department of Mechanical Engineering, Jalpaiguri Govt. Engineering College, Jalpaiguri, West Bengal, India, a.mahapatra2000@gmail.com
2 Post Graduate Scholar, Department of Mechanical Engineering, Jalpaiguri Govt. Engineering College, Jalpaiguri, West Bengal, India, bishaldey92@gmail.com

Abstract
Constantly decreasing water levels in the feeder canals of power plants have caused shutdowns of power generation for a few days at a time in recent years. Apart from that, the Central Electricity Authority produced a report on minimizing the overall water requirement of coal-based thermal power stations, and the report indicates that a major proportion of the total water requirement of the power stations is the cooling water used [6]. In this paper an attempt has been made to perform a thermodynamic study and analysis of a 250 MW thermal power plant, to reduce the mass flow of cooling water by decreasing its temperature with the help of a solar powered refrigerator. We therefore study an analytical mathematical model of a refrigeration system in which the cooling water coming from the cooling tower enters a solar refrigeration system before entering the condenser of the power plant. As the temperature of the cooling water drops because of the refrigeration system, the overall requirement of cooling water reduces. A mathematical model of the amount of cooling water flow per second is developed in this paper using Taguchi analysis.

Keywords: Cooling water, Thermal power plant, Solar refrigeration, Mass flow rate, Power generation, Taguchi analysis.

1. Introduction
In most places in the world, as well as in India, the thermal power plant turbine is steam-driven. Condensing the steam (the working fluid) is a significant task, because it needs a large amount of cooling water to take away a large amount of heat from the steam. The cooling water is recycled through the cooling tower, whose primary task is to reject heat into the atmosphere. The make-up water source is used to replenish the water lost to evaporation. The amount of make-up water is also very large; though it is 2-5% of the total cooling water, it is nowadays not available throughout the year. So it is time to think about reducing the consumption of cooling water. In this context we propose to install a refrigeration system in order to cool down the cooling water further before it enters the condenser. To avoid environmental hazards and the energy crisis, we would like to run the refrigeration system with the help of solar energy. Since our prime focus is to design and set up a solar powered refrigeration system which can cool a large amount of water, we have to be specific about a large-scale solar refrigeration system. Before that, we limit our study to the two main kinds of refrigeration system: (a) the vapour compression refrigeration system and (b) the vapour absorption refrigeration system.
[3] Previous research by NASA shows that heat-driven cooling systems such as the absorption cycle can also be solar powered, but their thermodynamic efficiency is not as good as that of vapour compression; vapour absorption requires more complex solar collectors, and such systems do not scale down in size as well, which is a major drawback. That is why we focus on the VCRS. [2]

2. Model Development
For a typical cooling tower of a 250 MW thermal power plant, we have collected some data [1]. If we assume that there is no heat loss, then the cooling water temperature at the cooling tower inlet equals the temperature of the hot cooling water leaving the condenser, and the cooling water temperature at the cooling tower outlet equals the temperature of the cold cooling water going to the power plant condenser. So we take the following values:

Mass flow rate of cooling water (m) = 3632 m³/hr ≈ 1010 kg/s
Temperature of cooling water going to the condenser = 29.4 °C
Temperature of cooling water coming out of the condenser = 46.1 °C

So, the heat extracted from the condenser is Q = m·C_p·dT, where m = mass flow rate of cooling water, C_p = specific heat of water at constant pressure = 4.187 kJ/kg-K, and dT = temperature difference in kelvin (K). This gives Q ≈ 60.3 million kcal/hr, which is the amount of heat the cooling water extracts from the condenser.

Evaporation loss (m³/hr) = 0.00085 × 1.8 × circulation rate (m³/hr) × (T1 - T2), where T1 - T2 is the temperature difference between the inlet and outlet water [1]. For a circulation rate of 1010 kg/s and a temperature difference of 16.7 K between inlet and outlet, the evaporation loss is about 25.9 kg/s, which is 2.56% of the circulating water.

We are going to set up a refrigeration plant which will cool the cooling water further after it comes from the cooling tower. If the temperature can be decreased by a further 1 K, the cooling water mass flow rate becomes 952.5 kg/s, so the mass flow rate decreases by 57.5 kg/s, or 5.7% (approx.). For reducing the temperature of 952.5 kg/s of cooling water by 1 K, the refrigerating effect or load is about 3988 kJ/s (kW).

In the same way, for decreasing the cooling water temperature further by 2 K, 3 K, 4 K and 5 K, with the mass flow rate without refrigeration being 1010 kg/s, we get:

Table: Reduction of mass flow of cooling water with decreasing temperature (columns: 1 K, 2 K, 3 K, 4 K, 5 K; rows: mass flow rate m in kg/s, reduction of mass flow rate in kg/s, % reduction of mass flow rate, refrigerating effect or load RE in kW).

Now we focus on the refrigeration plant to be used. For this operation we have chosen the vapour compression refrigeration system with ammonia (NH3) as the working fluid or refrigerant [5]. However, the parameters of this refrigeration plant are taken to be similar to those of typical cold storages that work on the same VCRS with ammonia as refrigerant.
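The figures above (and the entries of the table) follow from the assumption, as in the text, that the condenser duty stays the same while the available temperature rise of the cooling water grows by the amount of pre-cooling. A short sketch of that calculation:

```python
# Cooling-water calculation of Section 2: condenser duty from the given
# temperatures, then the reduced flow and the refrigerating load when the
# water is pre-cooled by an extra x kelvin before entering the condenser.

cp = 4.187                     # specific heat of water, kJ/kg-K
m0 = 1010.0                    # circulating cooling water, kg/s (3632 m3/hr)
t_in, t_out = 29.4, 46.1       # condenser inlet / outlet water temperature, deg C

Q = m0 * cp * (t_out - t_in)   # heat rejected in the condenser, kJ/s
print(f"Condenser duty Q = {Q/1000:.1f} MW")

for x in range(1, 6):          # extra pre-cooling of 1 K ... 5 K
    m_new = Q / (cp * (t_out - t_in + x))   # flow needed for the same duty
    RE = m_new * cp * x                     # refrigerating load to pre-cool that flow by x K
    print(f"dT = {x} K: m = {m_new:6.1f} kg/s "
          f"({100*(m0 - m_new)/m0:4.1f}% less), refrigerating load = {RE:6.1f} kW")
```

For x = 1 K this closely reproduces the 952.5 kg/s flow, the 5.7% reduction and the roughly 3988 kW load quoted above.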

The parameters are: evaporator temperature = -14 °C, condenser temperature = 30 °C, pressure at compressor outlet = 160 psi. From the ammonia table we get the enthalpies h1 = 1427 kJ/kg and h2 = 1600 kJ/kg, with h_f3 = h4 taken from the same table. So the refrigerating effect per kg of refrigerant is (h1 - h4), and for reducing the temperature by 1 K the load is about 3988 kJ/s.

3. Regression Analysis
Regression analysis is a statistical tool which provides the relationship between response and predictor variables; it tells how the response variable changes with changes in the predictor variables. The simple regression equation is Y = a + bX. As we have to consider more than one predictor variable, simple regression analysis cannot be used, and multiple regression analysis was adopted to obtain the mathematical model. The temperature drop from the inlet to the outlet of the refrigerator (dT), the compressor power input (W) and the COP are the control factors, and the final equation has the form

m = β0 + β1·dT + β2·W + β3·COP + β4·(dT·dT) + β5·(W·W) + β6·(COP·COP) + β7·(dT·W) + β8·(W·COP) + β9·(COP·dT)

This equation can be solved by the following matrix method: [β] = [X]⁻¹·[Y], where [β] is the coefficient matrix, [Y] is the response variable matrix and [X]⁻¹ is the inverse of the control variable matrix. In this problem there are three variables, each with three ranges or levels, so the Taguchi orthogonal array (OA) L9 design matrix is used for the theoretical experiment. But the L9 orthogonal table gives us nine equations while the number of unknown coefficients is ten (i.e. [X] is 9 × 10), so it is not possible to solve this set of equations by the above matrix method. To solve this problem we did one extra experiment, taken from the L18 orthogonal array. Now the equations can easily be solved by the above matrix method because the number of equations equals the number of unknown coefficients (i.e. [X] is 10 × 10).

Control factors and their ranges: the three control factors and their ranges or levels are shown in the table below.
Table: Control factors and their levels
Table: Power requirement for each K of temperature drop

In this study, values of the control parameters or factors (dT, W, COP) were substituted and the corresponding m was calculated from the equation Q = m·C_p·dT, where C_p = specific heat of water at constant pressure = 4.187 kJ/kg-K. Nine theoretical experiments were done according to the L9 orthogonal array and one experiment according to the L18 orthogonal array. These ten sets of equations were then solved in MATLAB by the matrix method to obtain the required values of the coefficients. The proposed mathematical model from the theoretical experiments is of the above ten-coefficient form, with the numerical values of β0 ... β9 taken from the MATLAB solution.

Result & Analysis
The above regression equation was simulated on a computer to find the effect of the control factors (dT, W, COP) on the change in mass flow rate of cooling water (m) within the considered ranges or levels. This is done using a C program which gives the response for each input; one parameter at a time is varied within the considered range while the other two parameters are kept constant at particular values.

Variation of mass flow rate (m) with temperature drop (dT), while the other two parameters, compressor power input and COP, were kept constant at 1000 kW and 6.4 respectively.
Fig.: Mass flow rate (m), kg/s, against temperature drop (dT), K.
This figure shows that the mass flow rate decreases with increasing temperature difference.

Variation of mass flow rate (m) with compressor power input (W), while the other two parameters, temperature drop (dT) and COP, were kept constant at 2 K and 6.4 respectively.
Fig.: Mass flow rate (m), kg/s, against compressor power input (W), kW.
This figure shows that a larger mass can be cooled with a greater power input to the compressor.
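A compact sketch of the ten-run matrix solve and the one-at-a-time sweep described above is given below. The factor levels, the extra run and, in particular, the response values are placeholders: the responses are generated from the physically motivated stand-in m = W·COP/(C_p·dT) purely so that the example runs, and the paper's own orthogonal-array data would have to be substituted to reproduce its coefficients.

```python
import numpy as np

cp = 4.187                                         # kJ/kg-K

def terms(dT, W, COP):
    """One row of the model: 1, dT, W, COP, the squares and the pairwise interactions."""
    return [1.0, dT, W, COP, dT*dT, W*W, COP*COP, dT*W, W*COP, COP*dT]

# Nine runs of an L9-style plan plus one extra run, with assumed levels:
# dT in K, W in kW, COP dimensionless.
lev = {"dT": [1, 3, 5], "W": [500, 1000, 1500], "COP": [5.4, 6.4, 7.4]}
plan = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1),(1,1,0)]
runs = [(lev["dT"][i], lev["W"][j], lev["COP"][k]) for i, j, k in plan]

X = np.array([terms(*r) for r in runs])                 # 10 x 10 control-variable matrix
Y = np.array([W*COP/(cp*dT) for dT, W, COP in runs])    # stand-in responses, kg/s
beta = np.linalg.solve(X, Y)                            # [beta] = [X]^-1 [Y]

def m_hat(dT, W, COP):
    return float(np.dot(terms(dT, W, COP), beta))

# One-at-a-time sweep: vary dT while W and COP stay at 1000 kW and 6.4
for dT in lev["dT"]:
    print(f"dT = {dT} K  ->  m = {m_hat(dT, 1000, 6.4):8.1f} kg/s")
```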

Variation of mass flow rate (m) with COP, while the other two parameters, temperature drop (dT) and compressor power input (W), were kept constant at 2 K and 1000 kW respectively.
Fig.: Mass flow rate (m), kg/s, against COP.
This figure shows that the mass flow rate first increases with increasing COP and then decreases after 6.4.

4. Conclusions
In this study the Taguchi method of Design of Experiments (DOE) was used to examine how the mass flow rate of cooling water (m) in a steady-flow refrigeration process of a VCRS varies with the temperature drop, the compressor power input and the COP. From the analysis of the results obtained from the mathematical model, the optimum values of the control factors for decreasing the mass flow rate of cooling water were identified. This can be a one-time investment, the maintenance cost is very low, and power generation need never be hampered by a scarcity of cooling water.

Finally, we would like to express our heartfelt gratitude to SRIRAM COLD STORAGE, Jalpaiguri, for information and help. We would also like to take this opportunity to extend our thanks to Prof. Dr. S. Mukherjee, H.O.D., Department of Mechanical Engineering, JGEC, for his cooperation and insightful comments. And we would like to thank all our well-wishers who helped us keep our work organized to the end.

References
1. Bureau of Energy Efficiency.
2. Solar Powered Refrigeration System, NASA.
3. Solar Refrigeration: Current Status and Future Trends, Dr. A. Mani.
4. A Textbook of Refrigeration and Air Conditioning, R. S. Khurmi and J. K. Gupta.
5. Simulation and Comparison of the Performance of Refrigerant Fluids in a Single Vapour Compression Refrigeration System, T. S. Mogaji.
6. Report on Minimisation of Water Requirement in Coal Based Thermal Power Stations, Central Electricity Authority.

Presence of employability related nontechnical competencies in the existing hospitality curriculum in West Bengal: A Study

Santanu Dasgupta, Head 1, Deeptiman Bhattacharya, Assistant Prof 2
1,2 Department of Hotel Management, Siliguri Institute of Technology, Salbari P.O. Sukna, Dist Darjeeling, West Bengal, India
1 santanu_dg@rediffmail.com, 2 deeptiman_sit@rediffmail.com

Abstract
The hospitality curriculum has been designed to offer students a clear pathway into the dynamic hospitality arena, where new challenges are to be met every day. As per the World Travel and Tourism Council (WTTC) and ITDC, there is a huge requirement for skilled professionals to fill the upcoming gaps in hospitality. The hospitality industry in India is now full of scope for students of the various Hotel and Hospitality Management colleges across the country. This can be understood from the opening of many privately owned Hotel Management colleges and private universities offering hospitality courses, not only in the state but across the whole country. With a global boom in the hospitality industry worldwide, and with India in the limelight, many aspirants have focused their careers on this option of becoming a hotelier. To meet the growing demands of the ever-changing industry, prospective hoteliers have to be equipped with hospitality employability skills.
Enhancing employability skills is considered the most important challenge for all universities, colleges and institutes, as maintaining a balance between the course curriculum and classes on employability skills seems to be a problem. The purpose of this study is to scrutinize the existing colleges and universities in West Bengal offering Hotel Management degree programs and their effectiveness in developing employability skills among hospitality students.

Keywords: employability skills, hospitality curriculum, hospitality industry, prospective hotelier

1. Introduction
The sense of hospitality is present in the tradition of Indian culture. We are all familiar with the term Atithi Devo Bhava, which means the guest is to be treated like God. As this phenomenal and ancient phrase has been derived from the Vedas, written more than 5000 years ago, we can certainly claim that hospitality is in the blood of every Indian. From Kashmir to Kanyakumari, the Indian subcontinent offers the most diversified ethnic variety. With the help of campaign initiatives undertaken by the government, like Incredible India, and with famous celebrities roped in to create awareness, a huge attraction towards our motherland has been created among foreign nationals. India, which was more of an agricultural economy earlier, has gradually transformed into an industrial economy with the passage of time. Today's modern India is more inclined towards service industries, especially IT, tourism and hospitality, with huge young manpower, advanced technology, infrastructure, internationalization of the market, etc. Of these, the tourism and hospitality industry has been showing tremendous growth and was one of the few industries that showed only a marginal slowdown in the great market depression a few years back. According to the CII report of 2012, the contribution of the travel and tourism sector in India to Gross Domestic Product was estimated to rise from 8.6% (USD 117.9 billion) in 2010 to 9.0%. Between 2010 and 2019 the demand for travel and tourism in India is expected to grow annually by 8.2%. The United Nations World Tourism Organization (UNWTO) has forecast that the travel and tourism industry in India will grow at approximately 8% per annum in the years from 2008 onwards. The foreign exchange earnings from tourism could grow at a rate of 14% during the same period. The present government in the state has recognized the importance of tourism as a tool for generating revenue and is trying to use it to its maximum possible potential. As per the rating agency ICRA, India's hotel industry revenues are likely to improve by 9-10 per cent, the reason for the upward movement being improved occupancy. The premium category hotels (five star, five star deluxe) will enjoy an increase of 8 per cent in room inventory, compared to 4 per cent in the preceding period. As can be seen, many major international hospitality brands like Marriott, Accor, InterContinental, Sheraton etc. have shown great interest in India in the last decade. The brands have understood the innate nature of hospitality in India and are leaving no stone unturned to capitalize on it. After making their presence felt in the metropolitan cities of the nation, these brands have now turned their eyes towards the smaller cities. For example, as reported, Siliguri in West Bengal is about to see its first five star hotel open. Hospitality related education institutions have also seen a steep rise in numbers in the last few years, with most of them located in and around Kolkata, the state capital. Other cities like Durgapur and Asansol have also seen such trends, which is a good sign, as the industry is demanding quality students with high employability skills and moral values.

2. Hospitality Education
Hospitality education in India started several decades ago. The pioneer of hospitality education in India is the National Council for Hotel Management and Catering Technology (NCHMCT); the Institutes of Hotel Management (IHM) under NCHMCT are run by the central government. The foundation of Indian hospitality education was laid in Bombay (presently Mumbai). Almost all state capitals of India have one IHM, and Kolkata IHM, West Bengal, is one of the oldest IHMs in India. The state technical university of West Bengal, Maulana Abul Kalam Azad University of Technology (MAKAUT), offers two programs in hospitality, namely Bachelor in Hotel Management and Catering Technology (approved by AICTE) and Bachelor in Hospitality Management. There are three colleges in West Bengal offering Bachelor in Hotel Management and Catering Technology under AICTE, while there are twenty one Hospitality Management colleges. The private universities offering Hotel Management programs in West Bengal are Amity University and JIS University. There are numerous small institutions in West Bengal which run bachelor degrees in hospitality in distance education mode and even self-awarded diplomas. The central government skills development centres (under the Hunar se Rozgar Tak scheme) have added many centres offering short term courses. Unfortunately these small institutions lack basic infrastructure and are unable to produce good market-ready professionals. But still, with the abundance of jobs due to the phenomenal growth of hospitality and tourism, most students do not have to worry about placement.

3. Employability skills
Employability relates to the ability to be in employment (Belt, 2010). It also increases the chance of an individual sustaining his job through a set of characteristics.
Explained in more depth, it may also include sustaining the job and developing a career. The employability skills of a professional will change with the change of responsibility in his career. Employability skills may be defined as "the skills almost everyone needs to do almost any job" (UK Commission, 2009b, p. 10). The business of any organisation depends upon how the employees are perceived by the customers in terms of personality, behaviour, etc. (Belt, 2010). So employability skills are receiving more emphasis than in earlier years. Just as charity begins at home, employability skills development should begin at the graduate education level. The concept of employability has been in the literature for many years. Employability may be defined as the ability to be employed, i.e. a) the ability to gain initial employment, b) the ability to maintain employment, and c) the ability to obtain new employment if required. Employability, in other words, is about being capable of getting and keeping fulfilling work. An individual's employability assets comprise basic skills, occupation-specific skills and involvement skills. Employability is not just vocational and academic skills; an individual requires relevant and updated market information to take the right decision (Hillage and Pollard, 1998).

The UK Commission for Employment and Skills report (The Employability Challenge, 2009) indicates that successful employability programmes should include:
- Experiential action learning
- Work experience
- Reflection and integration

The Business Council of Australia (BCA) and the Australian Chamber of Commerce and Industry (ACCI), in association with some top hospitality recruiters, created the Employability Skills for the Future report. As per the report, the top eight employability skills required by hospitality professionals are:
- Communication
- Teamwork
- Problem-solving
- Initiative and enterprise
- Planning and organising
- Self-management
- Learning
- Technology

Macleod and Hughes (2005) commented in their review that employability is not only about employment preparation, but also about:
- The transfer into employment from education and training
- Progression in employment
- Adapting to change
- The transfer of skills from one setting to another
- Development of skills at work, and
- Transition from periods of unemployment to employment.

Kavita and Sharma (2011) pointed out that only half of the total hospitality colleges provide education that adds to the students' professional skills. Hospitality leaders demand appropriate management skills as well as certain set competencies in students graduating from hospitality colleges. As per the National Network of Business and Industry Associations, employability skills are broadly divided into personal skills, people skills, applied knowledge and workplace skills; these skills are very much interconnected and interdependent. According to Keep et al., skills shortages refer to situations where vacancies remain unfilled because employees fail to meet the required levels and types of skills; this results in low-paid jobs and long working hours. It is true that different organisations have different cultures, but the employability skills requirement remains a top priority. The term employability skills refers to those skills required to acquire and retain a job. Earlier, employability skills were job-specific in nature and were not treated as a part of academics.
Presently, the definition of employability has been extended beyond job-specific areas to a variety of attitudes and habits (Padmini, 2012). Mr. Manish Sabharwal, Chairman of Team Lease Services, said at the CII Global Summit held in New Delhi in September 2008 that success comes from three Es, namely Education, Employability and Employment. McQuaid et al. (2005), in their paper introducing employability, explained the term from the various concepts illustrated by different researchers over more than 100 years. They argued in particular about the narrow and broad perspectives of the approach to employability. In dealing with different facets of employability, policies range from those seeking to improve employability skills and attributes, to helping the job search process, influencing personal circumstances or dealing with aspects of labour demand. An important facet of employability relates to the job search process. Dependency on higher education qualifications and the associated transferable skills and competencies has increased in recent years. A decline in traditional occupations has left many unemployed. Geographical location has a very important role in creating access to employment and training opportunities. Policies are required to enhance the mobility of disadvantaged people in the labour market to make them more experienced and confident. Communication technologies, particularly the internet, should be utilised to improve job search provision among unemployed people. Policies to link local labour demand to training, including local partnerships, are required to meet the difficulties of unemployment.

Fig 1: Determinant of employability. Source: Hogan, Robert et al. (2013), Employability and Career Success: Bridging the Gap between Theory and Reality.

Hospitality Curriculum
The hospitality curriculum has been specially designed to equip aspiring hospitality graduates to get jobs in the hospitality sector, especially hotels. Generally, after completion of a hotel management graduation degree, the students are placed in star category hotels. In the hospitality industry, the major departments where a fresher Hospitality Management graduate is generally placed are:
- Front Office: looks after the selling of guest rooms, guest registration, providing information, and collection of payment during departure.
- Housekeeping: looks after the upkeep, beautification and maintenance of the hotel; also responsible for the smooth and comfortable stay of the guest.
- Food and Beverage Service: takes orders and serves food and drinks to guests in the different food and beverage outlets.
- Food Production: prepares food for guests with proper hygiene standards.
- Sales: looks after the sale of rooms, banquet parties and other facilities offered by the hotel.

Among the above-mentioned departments, guest interaction is most expected in Front Office (reception counter), Sales (always in touch with new customers) and Food and Beverage Service (food and drink outlets). The other two departments, Housekeeping and Food Production, are termed back-area jobs and involve less interaction with guests in comparison to Front Office, Food & Beverage Service and Sales. Generally it has been observed that students with good interactive skills and a smart appearance are selected for the high-interaction areas.

Fig 2: Employability of candidates across different profiles in a hotel
- Hirable: students who can be hired directly for a job, defined on the basis of their aptitude and behavioural competencies.
- Trainable: students who are eligible to join a department of their choice after training provided by the hotel, defined on the basis of their aptitude and behavioural competencies.
- Employable: students who are either hirable or trainable. As almost all hospitality companies offer training to new joiners, the trainable candidates may also be termed employable.

The table below is a comparative study of IHMs versus the rest of the colleges. Since the IHMs are run by the central government, carry brand value and have a comparatively low fee structure, students generally target them first. So it may be assumed that the better students are found in the IHMs, and the IHMs lead the show regarding employability. The table below illustrates the employability of the students in the different departments across the hotel.

The JIS University hospitality management syllabus matches more or less with the MAKAUT syllabus and nothing new or unique has been observed; similar topics like attributes of food and beverage service personnel and handling guest complaints are there in the syllabus. The AICTE syllabus has Personality Development as a practical paper in the sixth and seventh semesters, in addition to the relatively small portions on guest handling techniques scattered in the Front Office and Food & Beverage Service syllabus. Although the personality development practical papers may enhance the employability skills of a student, the real essence of actual development is hardly witnessed, as there is no clear syllabus that may be followed; so the process cannot be standardized. The Amity University Kolkata syllabus is unique in that a student is given an option to take up a specialization outside the regular program. This enables a student to avail of job options outside his or her degree. The syllabus also offers papers like Understanding Self for Effectiveness, Problem Solving and Creative Thinking, Effective Listening, Group Dynamics and Team Building, and Presentation Skills as open electives in the four-year bachelor degree program of Hotel Management and Catering Technology. But the option of open electives gives a student the liberty to avoid the subject during the curriculum.
Fig 3: Employability in IHMs vs. Rest of the Colleges
Industrial exposure training is an integral part of the hotel management curriculum. The students are sent by the college for training at a hotel (preferably five-star properties) to get acquainted with real-life situations. The training is mandatory for all students, as this learning goes beyond the classrooms and the laboratories. The training is generally placed at the middle of the program, so that by the time a student is ready to be placed in the industry, he or she has gained some real-time experience. A student learns some employability skills during the training, but it is very obvious that an undergraduate student who is a novice cannot be placed in key positions of guest handling. No property can take the risk of losing customers through an improper way of service. As a result the students are mostly placed in the back areas of the main departments, where they hardly gain any solid exposure. The hospitality curriculum followed in different reputed colleges and universities in West Bengal can be considered a miniature of the pan-Indian hospitality management syllabus, as the MAKAUT/AICTE syllabus is followed in all AICTE-approved colleges in India. NCHMCT also offers the same syllabus for all IHMs. The front office syllabus of NCHMCT has sections like how to welcome a guest, taught in the first semester. Social skills like handling guest complaints, telephone etiquette, and dining and service etiquette are a part of the second semester syllabus of Food & Beverage Service. Situation handling and case studies are a part of the front office syllabus in the third semester. Business communication has been included in the theory syllabus, but its application in the practical syllabus is missing. The Bachelor in Hospitality Management syllabus of MAKAUT has a few small topics like attributes of food and beverage service personnel and handling guest complaints, which are very department-related topics.
The business communication syllabus is more theory oriented; the practical syllabus contains only group discussion and extempore practice.
Conclusion
A broader concept of employability allows the additional consideration of vital demand, personal circumstances and other factors that influence employability in a particular labour market at a particular time. Employability is a two-sided equation, and many individuals need various forms of support to overcome the physical and mental barriers to learning and development (i.e. updating their assets). Employability is not just about vocational and academic skills. Individuals need relevant and usable labour market information to help them make informed decisions about the labour market options available to them. They may also need support to realise when such information would be useful, and to interpret that information and turn it into intelligence. People need the opportunities to do things differently, to access relevant training and, most crucially, employment. A wholehearted approach towards the enhancement of employability skills is the need of the hour for hospitality students to increase their possibility of employment. More stress should be given to topics like communication, IT skills, interpersonal skills, leadership skills, learning ability etc. to ensure more employable and marketable students.
Suggestions
Regular review of the syllabus by an academic council having members from the industry.
Inclusion of chapters on enhancing employability skills in every semester.
Pan-India uniformity of the hospitality management undergraduate program syllabus.
Take experts' views and implement them while modifying the syllabus.
References
1. Amity University Kolkata syllabus
2. Belt, V. and Richardson, R. (2010) Employability Skills: A Research and Policy Briefing. Research and Policy Directorate of the UK Commission for Employment and Skills
3. Macleod, Deidre and Hughes, Maria (2005) What we know about working with employers: a synthesis of LSDA work on employer engagement
4. Hillage, J. and Pollard, E. (1999) Employability: Developing a framework for policy analysis. Institute of Employment Studies (IES) in Labour Market Trends. London

5. indianhotelindustryrevenuestogrow910infy17/articleshow/ accessed on 14/01/ ; accessed on 17/01/ ; accessed on 24/01/ ; accessed on 17/01/
10. JIS University syllabus
11. Kavita, K.M. and Sharma, Priyanka (2011) Gap analysis of skills provided in hotel management education with respect to skills required in the hospitality industry: The Indian Scenario. International Journal of Hospitality & Tourism Systems, Volume 4, Issue 1
12. Keep, E., Mayhew, K., Payne, J. (2006) From skills revolution to productivity miracle - not as easy as it sounds? Oxford Rev Econ Policy 22
13. McQuaid et al. (2005) Introducing Employability. Urban Studies, Vol. 42, No. 2
14. National Network of Business and Industry Associations (2014) Common Employability Skills: A Foundation for Success in the Workplace: The Skills All Employees Need, No Matter Where They Work
15. Padmini, I. (2012) Education Vs Employability: the Need to Bridge the Skills Gap among the Engineering and Management Graduates in Andhra Pradesh. IJMBS Vol. 2, Issue 3, July-Sept
16. Singh, Ajeet Kr. (2012) Empirical study of hospitality management education on employability or entrepreneurship in Delhi and NCT in hospitality. AIMA Journal of Management & Research
17. Tseng, C.Y., Chen, L.C. (2014) Employability and Employment in the Hotel Industry: A Review of the Literature. Bus Eco J 5:105
18. UK Commission for Employment and Skills (2009b) The Employability Challenge, UK Commission for Employment and Skills, Wath upon Dearne
Transmission Line Cost Allocation by Orthogonal Projection
Sk Mafizul Islam
Department of Electrical Engineering, Jalpaiguri Government Engineering College, Jalpaiguri, W.B., India
mafeezulislam@gmail.com
Abstract
This paper presents a circuit theory-based method for transmission line cost allocation in deregulated electricity markets using orthogonal projection. The orthogonal projection of the current contributed by each generator (load) acting alone onto the resultant current through a line is calculated, and the share of that generator (load) in the current through the line is determined. Hence the transmission line cost allocation is obtained. The method gives a clear explanation of the obtained cost allocation and is superior to other methods.
Keywords: Circuit theory, network usage, orthogonal projection, pool electricity market, power flow, transmission cost allocation
Introduction
Under deregulation, the transmission network usage cost is required to be recovered from network users, i.e. private generation companies (Gencos) and private distribution companies (Discos). Significant proposals available in the literature on transmission network cost allocation among generators and demands are as follows. The pro-rata method [1, 2] charges both loads and generators at a flat rate per MW-hour irrespective of the location of the generation or load bus. Kirschen et al. [3] and Bialek [4] proposed methods using the proportional sharing principle, which states that any real power flow going out of a bus is proportionally shared between the flows entering that bus, so as to satisfy Kirchhoff's current law. Lima [5] proposed a flow-based method to estimate line usage by generators and demands and charge them accordingly.
Gil et al. [6] and Galiana et al. [7] proposed usage-based methods where each generation is proportionally assigned a fraction of each demand and, conversely, each demand is proportionally assigned a fraction of each generation, so as to satisfy both of Kirchhoff's laws. W. Y. Ng [8] proposed a method based on generation shift distribution factors, which is dependent on the choice of the slack bus; hence the resulting cost allocation is controversial. Conejo et al. [9] proposed a Zbus matrix [1] based network cost allocation technique, but for transmission networks having low values of shunt admittance this method fails. Methods based on circuit theory are well accepted as they use the network characteristics. For circuit theory based methods, the branch current is to be expressed in terms of the current injections at the generation or load buses [10], and for that the selection of an equivalence for the power injections is to be done first. Wang et al. [10] have shown, using two radial networks, that modeling loads as equivalent admittances and generators as current injections, and conversely modeling generation as equivalent admittances and loads as current injections, is better than modeling all generations and loads as injections. For cost allocation, Conejo et al. [9] expressed the real power outgoing through a line in terms of Z_BUS elements and bus current injections. When the real power outgoing through a line at an end bus is positive, they name it cost allocation by the Z_BUS method; when it is negative, they name it cost allocation by the Z_BUS backward method. Unfortunately the Z_BUS and Z_BUS backward cost allocations differ greatly for all the lines. Then, by averaging the line cost allocations of the Z_BUS and Z_BUS backward methods, they obtained an average cost allocation named the Z_BUS avg cost allocation, which has no mathematical justification.

Wang et al. [10] have presented an orthogonal projection technique to calculate the share of a generator in a branch current using the CE mode of equivalence. Here, instead of the actual complex current due to one generator through a particular branch, the orthogonal projection of that current onto the resultant branch current is calculated; the share calculation is then more accurate and is independent of the choice of slack bus. Conversely, the current through each branch due to each load and its orthogonal projection onto the branch current can be calculated. As power transmission through a line benefits both the generators and the loads, half of the line loss is allocated to loads and half to generators [10]. They have successfully used this method for transmission loss allocation. In this paper, network cost allocation based on orthogonal projection is proposed. For transmission networks having low values of shunt elements, or no shunt elements, this method works well. The method does not depend on the choice of slack bus. An important feature of the proposed method is its embedded proximity effect, which shows that a generator (demand) mostly uses the lines which are electrically close to it. This is not imposed; rather, it comes out as a result of relying on orthogonal projection and circuit theory. Other methods need important assumptions which reduce their practical interest: applying the pro-rata criterion implies disregarding generator (load) locations in the network, using the proportional sharing principle implies imposing that principle, and the Z_BUS avg method has no justification. On the contrary, the proposed method simply relies on orthogonal projection and circuit theory.
2. Related circuit theories
2.1 Equivalence mode
Consider an n-bus power system for which a solved power flow exists. Let us define S as the complex bus power injection vector, V as the complex bus voltage vector and I as the complex bus current vector. All buses are distinguished at first: a bus is considered a generator bus if its real power injection is positive, otherwise it is a load bus. Then the generators are converted to current injections and the loads to admittances [10, 11]. The equivalent admittance matrix (Yd) for the loads is added to the bus admittance matrix (Ybus) to get the new admittance matrix (YG).
The new bus admittance matrix (YG) is inverted to obtain the bus impedance matrix (ZG), which is required to calculate the current through any branch due to a generator [10, 11].
2.2 Current projection component
The orthogonal current projection technique [10] is used here. Let the current through the r-th branch due to all injections at the different buses be I_r, and the current through the same branch due to the injection at the i-th generator bus alone be I_r^Gi; let φ_r and θ_r^Gi be the phase angles of I_r and I_r^Gi respectively. The orthogonal projection component of I_r^Gi on I_r is then expressed as
I_rp^Gi = |I_r^Gi| cos(θ_r^Gi - φ_r).
If the slack bus is changed, the bus angles change, but the angle difference (θ_r^Gi - φ_r) does not change. The ratio of the orthogonal current projection due to a generator or load to the branch current therefore also remains the same and is a real number; thus the method is not dependent on the choice of slack bus. Conversely, representing all injections at generation buses as equivalent shunt admittances and the injections at each load bus as current injections, the orthogonal projection of each load bus current injection on each branch current can be calculated.
3. Proposed technique
As power transmission through a line benefits both the generators and the loads, half of the line cost is allocated to generators and the other half to loads, and this is well accepted by researchers and network participants [10]. The method is named cost allocation by the Orthogonal Projection method, in brief the Ortho method. Line cost is thus allocated in proportion to the orthogonal projection component of a generator or load relative to the net line current, as discussed in section 2.2 and equation 4; the cost allocation of line r to generation bus k follows this proportional formula, evaluated for all buses k = 1 to nbus.
4. System Studies
Fig 1: Four-bus system with bus power injections and line flows
The working of the orthogonal projection method for transmission cost allocation is illustrated by the four-bus system shown in Fig 1. Note that all buses are similar in terms of demand/generation, and the five lines in the power system have the same values of series resistance, reactance and shunt admittance in p.u. Fig 1 shows the real power generation and demand at each bus and the real power flow through the lines.
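To make the computation concrete, the following is a minimal sketch, not the author's MATLAB implementation: the function name, data layout and the neglect of line charging are assumptions made for the example. It converts loads to equivalent admittances and generators to current injections as in section 2.1, obtains each generator's branch-current contribution through ZG, forms the orthogonal projection components, and distributes half of each line cost among the generators in proportion to those components. The load-side half is allocated in the same way with the roles of generators and loads interchanged.

```python
import numpy as np

def ortho_cost_allocation_to_generators(Ybus, Sbus, Vbus, lines, line_costs):
    """Allocate half of each line cost to generator buses by orthogonal projection.

    Ybus       : (n, n) complex bus admittance matrix
    Sbus       : (n,) complex bus power injections from a solved power flow
    Vbus       : (n,) complex bus voltages from the same power flow
    lines      : list of (from_bus, to_bus, series_admittance) tuples
    line_costs : (nl,) line costs in $/hr
    Returns an (nl, n) array: cost of line r allocated to generator bus k.
    """
    n = len(Vbus)
    gen = np.real(Sbus) > 0                     # generator buses: positive real power injection
    Ibus = np.conj(Sbus / Vbus)                 # bus current injections, I = (S/V)*

    # Loads represented as equivalent shunt admittances added to Ybus (CE mode)
    Yd = np.zeros((n, n), dtype=complex)
    for k in np.where(~gen)[0]:
        Yd[k, k] = -Ibus[k] / Vbus[k]
    ZG = np.linalg.inv(Ybus + Yd)

    alloc = np.zeros((len(lines), n))
    for r, (f, t, y_ft) in enumerate(lines):
        # Branch current due to each generator acting alone (shunt charging neglected)
        I_r_gen = np.array([y_ft * (ZG[f, k] - ZG[t, k]) * Ibus[k] if gen[k] else 0.0
                            for k in range(n)], dtype=complex)
        I_r = I_r_gen.sum()                     # resultant branch current
        # Orthogonal projection of each contribution onto the resultant current
        proj = np.abs(I_r_gen) * np.cos(np.angle(I_r_gen) - np.angle(I_r))
        proj[~gen] = 0.0
        alloc[r, :] = 0.5 * line_costs[r] * proj / proj.sum()
    return alloc
```

Because the generator contributions sum to the resultant branch current, the projection components sum to its magnitude, so the shares are real numbers that add up to one half of the line cost, as the Ortho method requires.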

Conejo et al. [9] have shown that their method is superior to other methods. Here the results obtained are compared with those obtained using Conejo's methods, namely Z_BUS [9] and Z_BUS avg [9]. The cost allocations of each line and of the entire network are shown in Tables 1 and 2. G1 stands for the generation bus located at bus 1 and L3 stands for the load bus located at bus 3.
Table 1: Cost allocation of lines 1, 5, 2 and 4 in $/hr
Table 2: Cost allocation of line 3 and the network in $/hr
Line 1 and line 5 have similar operating conditions. Line 1 carries 60 MW from generator bus G2 to another generation bus G1. This power is not meant for bus G1; rather, it is for load buses L3 and L4. So the cost allocation of line 1 to G1 is negative, the allocation to G2 is highest, and the allocations to buses L3 and L4 are moderate by the orthogonal projection method, whereas the Z_BUS avg method also allocates a high cost of line 1 to G1, which is unjustified, and the Z_BUS method allocates no cost to L3, which is also unjustified. Similarly, line 5 carries 63 MW from load bus L4 to another load bus L3. This power is not serving the purpose of L4; rather, it is for load bus L3. So the cost allocation of line 5 to L4 is negative, the allocation to L3 is highest, to generation bus G2 second highest and then to G1 moderate by the orthogonal projection method, which is quite acceptable, whereas both the Z_BUS avg and Z_BUS methods allocate to L4 the second highest cost of line 5, which is unjustified. Line 2 and line 4 also have similar operating conditions. Line 2 carries 191.7 MW from generator bus G1 to load bus L3, so the cost allocations of line 2 to G1 and L3 are high by all the methods. Line 4 carries 190.0 MW from generator bus G2 to load bus L4, so the cost allocations of line 4 to G2 and L4 are high and to G1 and L3 are low by the Ortho and Z_BUS avg methods; but by the Z_BUS method the allocation to bus L3 is zero, without justification.
Line 3 is connected between G1 and L4. Its cost allocations to G1 and L4 are high, and to G2 and L3 are low, by all the methods. In Table 2 the network cost allocations to the buses are provided. G1 and L4 have similar positions and connectivity to the rest of the network and their power injections are nearly equal, so the costs allocated to them should be nearly equal; this is successfully achieved by the Ortho and Z_BUS avg methods. Similarly, G2 and L3 have similar positions and connectivity to the rest of the network and equal power injections, and hence the costs allocated to them should be nearly equal, which is again achieved by the Ortho and Z_BUS avg methods; but in both these cases the Z_BUS method fails. As the Z_BUS avg cost allocation has no mathematical justification from the circuit point of view, and although Z_BUS is mathematically justified its cost allocation is not acceptable from a practical point of view to either researchers or network participants, the proposed method is superior to the existing methods.
5. Conclusions
It can be concluded that the cost allocation for a particular power system obtained by the orthogonal projection method is better than the Z_BUS and Z_BUS avg cost allocation techniques. Here the cost allocation is well established by circuit theory and the orthogonal projection principle, and it is independent of the choice of slack bus. It works well for systems having no, or low values of, shunt elements. The branch and network cost allocations also meet the physical conditions; this has been shown on a small power system network. Costs allocated to lines may be positive or negative; such allocation is preferable and physically realizable, and negative allocations are compensated within the overall network cost allocation.
References
1. A. J. Conejo, F. D. Galiana, and I. Kockar, "Z-bus loss allocation," IEEE Trans. Power Syst., vol. 16, no. 1, Feb.
2. D. S. Kirschen and G. Strbac, Fundamentals of Power System Economics. Chichester, U.K.: Wiley.
3. D. S. Kirschen, R. N. Allan, and G. Strbac, "Contributions of individual generators to loads and flows," IEEE Trans. Power Syst., vol. 12, no. 1, pp. 52-60, Feb.
4. J. Bialek, "Topological generation and load distribution factors for supplement charge allocation in transmission open access," IEEE Trans. Power Syst., vol. 12, no. 3, Aug.
5. J. W. M. Lima, "Allocation of transmission fixed rates: An overview," IEEE Trans. Power Syst., vol. 11, no. 3, Aug.
6. H. A. Gil, F. D. Galiana, and A. J. Conejo, "Multiarea transmission network cost allocation," IEEE Trans. Power Syst., vol. 20, no. 3, Aug.
7. F. D. Galiana, A. J. Conejo, and H. A. Gil, "Transmission network cost allocation based on equivalent bilateral exchanges," IEEE Trans. Power Syst., vol. 18, no. 4, Nov.
8. W. Y. Ng, "Generalized generation distribution factors for power system security evaluations," IEEE Trans. Power App. Syst., vol. PAS-100, Mar.
9. A. J. Conejo, J. Contreras, D. A. Lima, and A. P. Feltrin, "Zbus transmission network cost allocation," IEEE Trans. Power Syst., vol. 22, no. 1, Feb.
10. H. X. Wang, R. Liu, and W. D. Li, "Transmission loss allocation based on circuit theories and orthogonal projection," IEEE Trans. Power Syst., vol. 24, no. 2, May.
11. J. S. Daniel, R. S. Salgado and M. R. Irving, "Transmission loss allocation through a modified Y-bus," Proc. Inst. Elect. Eng., Gen., Transm., Distrib., Mar. 2005, vol. 152, no. 2.

Charge Simulation Method Based Earthing Grid Design as an Alternative to Conventional Approaches
1 Debojyoti Mondal, 1 Sumanta Dey, 1 Anik Manik, 1 Bapi Das, 1 Amit Kumar Mandal, 1 Abhisek Roy, 2 Dr. Santanu Das
1 Final Year Students, EE Dept., Jalpaiguri Govt. Engg. College, Jalpaiguri, India
2 Associate Professor, EE Dept., Jalpaiguri Govt. Engg. College, Jalpaiguri, India
Abstract
This paper proposes a Charge Simulation Method (CSM) based earthing grid design approach as an alternative to the conventional design methods. A substation earthing system is necessary to protect people working in the vicinity of the substation, and the equipment, against the danger of huge fault currents. Design of the earthing grid in ac substations requires the calculation of the Ground Potential Rise (GPR), step potential, touch potential and mesh potential, and keeping them within their safe limits. Earthing grid design has been successfully and effectively carried out by the conventional method as proposed in the IEEE Guide for Safety in AC Substation Grounding. In this paper, an alternative approach involving the charge simulation method is proposed to design the earthing grid. An algorithm for the CSM based earthing grid design technique has been implemented in the MATLAB environment. Various design criteria, which include the fault current duration, current division factor, soil resistivity, the grid parameters etc., were considered to compute the values of the various design variables at the final stage of the design, and their values were ensured to be within the permissible limits. Results obtained from the proposed CSM based earthing grid design method were compared with the results of the conventional method and found to be reasonably close to each other.
Keywords: Earthing grids; Ground Potential Rise; mesh voltage; step voltage; touch voltage; Charge Simulation Method
2. Charge Simulation Method
According to the charge simulation method (CSM), the potential at the i-th contour point can be found by summing the potentials resulting from the individual fictitious discrete charges (Q_j), as long as the point does not reside on any one of the charges. The discrete charges can be point, line, circular or of any other configuration; as in Fig 1, the fictitious charges considered here are point charges. The magnitudes of the point charges can be obtained from equation 1,
φ_i = Σ_j P_ij Q_j   (1)
The positions of the point charges and contour points are given in a rectangular coordinate system. The respective distances between the contour (evaluation) points and the charge points are calculated as follows:
d_ij = sqrt((X_i - X_j)² + (Y_i - Y_j)² + (Z_i - Z_j)²),
where X_j, Y_j and Z_j are the coordinates of the fictitious point charges and X_i, Y_i and Z_i are the coordinates of the contour points. After calculating the distances between all the charge points and the contour points, the P_ij matrix is calculated and hence the charge matrix Q_j is determined from equation 1. The potential at any point xx outside the electrodes, as shown in Fig 1, can again be calculated by using the following equation:
φ_xx = Σ_j P_xxj Q_j,
where φ_xx is the potential at the respective evaluation points, P_xxj is the potential coefficient matrix and Q_j is the charge matrix.
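The matrix computation described above can be organised as in the short sketch below. This is an illustrative outline only, not the authors' MATLAB program: the single-conductor geometry, the number and placement of fictitious charges, the assumed ground potential rise and all names are hypothetical, and the potential coefficients follow the standard image-method form for sources buried in uniform soil.

```python
import numpy as np

RHO = 400.0  # uniform soil resistivity in ohm-m (the paper's example uses 400 ohm-m)

def coeff(points, sources):
    """Potential-coefficient matrix for point sources buried in uniform soil.

    points  : (m, 3) evaluation/contour coordinates (x, y, z), z negative below grade
    sources : (n, 3) fictitious source coordinates
    Standard image-method form for a buried source: (rho/(4*pi)) * (1/d + 1/d'),
    where d' is the distance to the image source mirrored in the earth surface z = 0.
    """
    d = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=2)
    mirror = sources * np.array([1.0, 1.0, -1.0])
    dp = np.linalg.norm(points[:, None, :] - mirror[None, :, :], axis=2)
    return (RHO / (4.0 * np.pi)) * (1.0 / d + 1.0 / dp)

# Illustrative use: one 10 m grid conductor buried 0.5 m deep, held at an assumed GPR.
n = 40
xs = np.linspace(0.0, 10.0, n)
sources = np.column_stack([xs, np.zeros(n), np.full(n, -0.5)])
contours = sources + np.array([0.0, 0.008, 0.0])   # contour points on the conductor surface
GPR = 2000.0                                       # assumed ground potential rise (V)

P = coeff(contours, sources)
Q = np.linalg.solve(P, np.full(n, GPR))            # solve [U] = [P][Q] for the charges

# Earth surface potential (ESP) profile 1 m to the side of the conductor
profile = np.column_stack([np.linspace(-5.0, 15.0, 200), np.ones(200), np.zeros(200)])
esp = coeff(profile, sources) @ Q
print("Maximum ESP on the profile: %.0f V" % esp.max())
```

The step, touch and mesh voltages discussed next can then be read off such an ESP profile and compared with the tolerable limits of the IEEE standard.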
Now, as per the definitions of step voltage, touch voltage and mesh voltage given in the IEEE standard [1], from the ESP graph generated by the CSM program it is possible to calculate the respective values, compare them with the maximum tolerable limits, and determine whether the design is safe.
1. Introduction
A safe grounding design is essential for dissipating huge amounts of current deep into the earth under normal and fault conditions, for the safety of the persons working there and the proper functioning of the devices. This is done by comparing the step and touch voltages of a person standing on the ground under which the earthing grid is located with the tolerable values. Proper design of the grid for any power plant or substation is necessary for the safety of the personnel and the sophisticated machines working there. According to the IEEE standard, if the calculated step and mesh voltages are less than the tolerable values of step and mesh voltages, the design is safe and the desired safety limit is achieved. In this paper, the design solutions obtained by the conventional IEEE method have been compared with the solutions provided by numerical computation methods developed in MATLAB applying the theoretical concepts of the charge simulation method. As the results are quite similar to those given in the example for a square grid in Annex B (page 129) of the IEEE Guide for Safety in AC Substation Grounding, the design solutions presented by the proposed method appear appropriate and efficient and can be used for future design analysis to generate quite accurate results. Therefore, the proposed approach will enhance the way of earthing grid design, as the time consumed for the design solutions will be much less than with the conventional methods. A detailed description of the evaluation and conceptual procedures is presented in this paper. Computerized simulation of earthing grids became popular mainly because of the high accuracy, speed and flexibility afforded by the use of computers. The validity of the charge simulation method is demonstrated by comparing its results with those of the conventional method in the IEEE standard [1] that is used to calculate the earth surface potential.
d: distance of a contour point from a charge point; d': distance of a contour point from the image charge.
Fig 1: Illustration of the charge simulation method to calculate Earth Surface Potential
Fig 2: Illustration of the grounding grid, step and touch voltage

3. Illustration of Grounding System
A grounding grid configuration is shown in Fig 2. The various information and parameters required for designing the grids can be found in the IEEE standard. Considering all design parameters and specifications, each step of the conventional method was carried out through rigorous hand calculations using the respective formulae given in the IEEE guide. The major disadvantage of this method is that it is a very time consuming process. Firstly, for the given substation site, field data such as the soil characteristics of the area, the substation capacity, conductor resistance, fault current magnitude etc. are collected to design a square grid. Then the subsequent steps mentioned in the IEEE guide book were carried out and the results obtained at the final stage were checked. If the results are found unsatisfactory, the entire design is repeated until satisfactory results are obtained.
Fig 3: ESP graph generated by the CSM for n = 1441 (X axis: locations of the surface evaluation points; Y axis: potentials at each of these points)
Fig 4: ESP graph generated by the CSM for n = 2981
4. Comparison between Conventional and CSM Methods
The computational time of the implemented CSM based earthing grid design program is found to be approximately 38 seconds for n = 1441 point charges. If the number of point charges is increased from n = 1441 to n = 2981, the computational time increases from 38 seconds to 176 seconds. The value of ESP_max then changes from 1720 V to 2478 V, as may be seen in Fig 3 and Fig 4 respectively. The mesh potentials in the two cases are 817 V and 1247 V respectively. The maximum step voltages in both cases are well below the tolerable limits, as can be seen from the graphs. Both graphs, Fig 3 and Fig 4, can be analysed to get the different values of the step and touch voltages at different locations. After detailed analysis of the conventional and CSM based methods, it can be seen that the CSM approach is faster, efficient and user friendly in the context of earthing grid design. Although the conventional method has its own accuracy owing to manual calculations, a considerable level of accuracy is also achieved by the CSM method, and the labour behind the manual calculations is not needed here. The tolerable limits are the same in both cases, calculated by the formulae given in the IEEE standard. In both cases the considered grid is a 70 m × 70 m square without ground rods, with 10 × 10, i.e. 100, meshes, a homogeneous soil of resistivity ρ = 400 Ω-m and a crushed rock layer of resistivity ρ_s = 2500 Ω-m. A comparison of the different parameters obtained from the two approaches is presented in Table 1.
Table 1: Assessment of results obtained from conventional and CSM based methods
5. Conclusions
The paper proposes a reasonably accurate CSM based earthing grid design method which is used to generate the ESP distribution graph. An important benefit of CSM is its high accuracy of calculation. The large manual computational effort of the conventional method is not needed in CSM. The charge simulation method shows close agreement with the results obtained from the IEEE standard empirical formulae.
By the study and analysis of earthing grid design with the help of the charge simulation method algorithm, it may be concluded that the accuracy of this method can be increased by increasing the number of point charges. In this paper, the CSM based computing algorithm for earthing grid design implemented in the MATLAB environment covers two cases of the number of point charges, n = 1441 and n = 2981. The analysis of both cases concludes that the results generated by taking n = 2981 point charges are closer to the results obtained from the conventional method. Thus, by increasing the number of point charges, the accuracy level of the proposed method can be increased further. Apart from this, the ESP distribution plot obtained here shows that the estimated maximum values of touch voltage and mesh voltage appear at the corner meshes of the grid or at the periphery of the grid. From the study of the square grid without ground rods it can be seen that the mesh voltage (Vm) is larger than the tolerable limit of the touch potential, and hence the grid structure needs to be upgraded further. This method can be developed further, with a far higher accuracy level, for the design of more advanced and safer grid structures.
References
1. IEEE Guide for Safety in AC Substation Grounding, IEEE Std 80.
2. F. P. Dawalibi, D. Mukhedkar, "Parametric analysis of grounding grids," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-98, No. 5, Sep/Oct.
3. J. G. Sverak, "Simplified analysis of electrical gradients above a ground grid, part I: how good is the present IEEE method?", IEEE Trans. on Power Apparatus and Systems, vol. PAS-103, pp. 7-25, Jul.
4. J. M. Nahman, V. B. Djordjevic, "Nonuniformity correction factors for maximum mesh and step voltages of ground grids and combined ground electrodes," IEEE Trans. Power Delivery, vol. 10, no. 3, Jul.
5. B. Thapar, V. Gerez, A. Balakrishnan, and D. A. Blank, "Simplified equations for mesh and step voltages in an AC substation," IEEE Trans. Power Delivery, vol. 6, no. 2, Apr.
6. S. Ghoneim, H. Hirsch, A. Elmorshedy and R. Amer, "Optimization Technique for Grounding Grids Design," Journal of Electrical and Electronic Systems Research, Vol. 1, June.
7. E. Bendito, A. Carmona, A. M. Encinas, M. J. Jiménez, "The Extremal Charges Method in Grounding Grid Design."
8. S. Ghoneim (Kalubia, Ägypten), Dr.-Ing. Holger Hirsch, "Optimization of Grounding Grids Design with Evolutionary Strategies."

Effect of Property Management System (PMS) on hotels: A case study on the star hotels of Siliguri
Govind Baibhaw 1, Soumyadipta Mitra 2
Assistant Professor, Department of Hotel Management & Catering Technology, Siliguri Institute of Technology, Siliguri, Darjeeling, West Bengal
1 govind.baibhaw@gmail.com, 2 soumyadipta85@yahoo.co.in
Abstract
With the enormous growth of the hospitality industry, the facilities provided by hotels are increasing rapidly. At this growth rate, hotels are functioning filled with guests whose expectations are at their peak. To satisfy the wants and needs of customers, who are often called guests and regarded and treated as god, a Property Management System becomes a very crucial key to success. It has become impossible for star category hotels to maintain their business manually. Hotels are now not limited to the four core departments, viz. Front Office, Housekeeping, Food and Beverage Service and Food Production. With Maintenance and Engineering along with the Human Resource, Sales & Marketing, Purchase, Security and Spa departments, it is a challenge for the industry to maintain the co-ordination needed so that the guest is served with the best. A Property Management System is also known as a Hotel Operational System. A Property Management System (PMS) is a tool that enables hotels to interact within their departments and with customers more easily and productively. The PMS tool is the new facet of the hospitality industry, and small and medium sized hotels are rapidly adopting the system. This technology runs hotel functions smoothly by decreasing the dependency on human behaviour. Technologies have now streamlined a lot of processes, making travelling easier. Most hotels in Siliguri are using a PMS, but its use is not up to the mark. The findings reveal that the current usage of PMS is not as expected. It was found that as the level of the star hotel increased, the usage of PMS also increased. Finally, in this work suggestions to improve PMS usage have been highlighted.
Keywords: PMS, Hotel Industry, Star Hotels, HRACC
1. Introduction
With the advent of the 21st century, electronic communication has allowed hotels to reach consumers around the world through hotel websites, travel search engines and travel agents. When the concept of computers was developed in 1950, the airlines were the first to see the potential of using the technology to create electronic distribution of airfares. The concept of electronic distribution developed from the user interface system created by the airlines for inventory control, according to Cornell University's Peter O'Connor. Information on hotel websites gradually became more consumer friendly and travellers came directly in contact with hotels. The rapid rise in the inflow of customers made hoteliers realize that a direct connection from their own website to their CRS was very much necessary. A hotel PMS may interface with the CRS, yield management, front office, back office, POS and other systems; inventory management is the major part of the hotel PMS. A good PMS should give accurate and timely information to its users.
The leading PMS brands that are ruling the hospitality sector are as follows: Skyware PMS, Hoteliga, eZee Absolute, OPERA (Oracle) and Amadeus PMS.
A PMS plays a vital role in hotels in all areas. Siliguri, the gateway towards the hills of the North-East region, is a hub of the hospitality industry. Renowned tourist places like Darjeeling, Sikkim, Kurseong etc. are visited by tourists from all over the nation, who usually take a halt at the gateway, that is, Siliguri. International tourists also visit these places, which act as a tourism hub, so tourism is the value added service of Siliguri. Accordingly, the hotels of Siliguri should also give priority to the facilities they provide to their visitors. A PMS is software that is necessary for all hotels for their smooth functioning. The survey was conducted in the leading hotel brands of Siliguri: the Milestone hotel under The Summit Group of Hotels, Royal Sarovar Premiere and The Cinderella, the three star hotel brands of Siliguri. The services they provide represent the class of the brands they hold. But the PMS, which should be an integral part of a hotel for carrying out its regular jobs, is not being given its proper value: information is regularly maintained manually, and the documents that are prepared are part of a semi-automated system. We received a positive response only from Royal Sarovar Premiere regarding the PMS, because they use IDS software for their hotel, which is a renowned international system used by hotels on a worldwide basis. We had some close-ended questions to which all the hotels responded positively. According to the survey, most of the hotels in Siliguri are not using a PMS, though it is an essential part of the hospitality industry. A property like Sarovar Premiere is using the PMS (IDS), but integrated activities like management of room numbers, interaction with external systems and reservations of outlets are not done by the software. So access anywhere at any point of time is not effective, as the PMS has not been successful in Siliguri. The objective of this survey is to find out the usage of PMS in the hotels of Siliguri, which is a hub of the hospitality industry. The PMS, which plays a vital role in the hospitality sector, is not being prioritised in the hotels of Siliguri. The objective is to find out whether, even if a hotel is using the software, the software is effective enough to survive in the race to be the best. The study is also concerned with guest satisfaction in services like express check-ins and check-outs, wake-up calls and other such value added services. The GDS was SABRE, a semi-automated business environment. In 1960 the hotels began to see the opportunities that electronic distribution provided and started developing their own systems. Many hotel brands adopted hotel identities based on airline city codes, some of which are still used today. In 1988, sixteen hotels formed the hotel industry switch company.

2. Background
2.1 Review of Literature
1. Zorica Krželj-Čolović of the University of Dubrovnik, Department of Economics and Business Economics, and Zdenko Cerović of the University of Rijeka, Faculty of Tourism and Hospitality Management, Opatija, analysed in their research "Implementation of Property Management System in hotel industry" that one of the major components of information and communication technologies (ICT) is the PMS, which is highly under-utilised due to lack of proper training in the area.
2. Ali Yousaf, Supervisor, and Anders Steene, Associate Professor, of Södertörns University, Department of Business Studies, state in their research "The Impact of ICT in the Eyes of Hotel Managers (Cyprus)" that managers accept the importance of ICT as well as PMS in their hotels, but research on its effectiveness on the operations of hotels is lacking.
3. Vasuki Bellary of the University of Nevada, Las Vegas, carried out a case study on the effect of information technology related interface issues on overall guest experience in Hyatt Place hotels in the U.S., and found that the uses of PMS and other information technology tools were highly effective only when the staff were aware of and trained in them.
2.2 Research Gap
Siliguri is one of the regions of West Bengal whose name is not widely known in other parts of the nation, though it is one of the most remarkable landmarks for tourist places like Darjeeling, Gangtok etc. The beauty of the Dooars is also within one hand's distance from Siliguri, and these places are visited by tourists not only from all over India but also from many different countries. Still Siliguri has remained unknown to many people in the nation, and research on the hotels of this area has remained unattempted. The gap addressed in this research project concerns the implementation of PMS in the hospitality sector of Siliguri. A PMS acts as the nerve centre of a hotel, but it has been neglected in the properties of Siliguri. There is much research by many respected authors regarding PMS in hotels all over the world, but Siliguri, though it has marked a remarkable position on the map of India, still did not get its priority in the pens of researchers.
2.3 Objective
The objective of this research is to bring into the limelight the role of PMS in the hotels of Siliguri. The PMS is a vital part of the hotel industry, and its role is of utmost importance when ample data need to be dealt with every day. The prime objective of the research is to ascertain the uses and the efficiency of PMS and the effort of employees towards the satisfaction of guests. As the objective of a hotel is always to achieve par excellence in guest satisfaction, value added services become an important part. Through this research, the effect of PMS in delivering value added services is also to be ascertained.
2.4 Methodology
Seeing the astonishing popularity of the Property Management System among hoteliers, a questionnaire based survey was done in three hotels (Royal Sarovar Premiere, The Cinderella, The Milestone) of Siliguri to assess the effectiveness of PMS in the operation of the hotels. Three hotels of three types were selected: a hotel chain with central reservation, a hotel chain without a central reservation system, and a standalone hotel.
A descriptive survey was done for the study; three (3) star hotels of Siliguri were selected. Table 1 shows the mode of carrying out work in the different hotels. The table depicts that the hotel with a central reservation system is greatly influenced by the advent of Information and Communication Technology (ICT) and carries out 70% of its work digitally, but still 20% of the work is carried out manually, which is a significant figure. The chain without a central reservation system uses less of the technology: 60% of its work is digitally managed and 20% manually, whereas 20% of the total work is done both manually and digitally. The standalone hotel is not aware of Information and Communication Technology (ICT): 70% of its work is manual and 20% is digitally managed.
Uses of PMS
Table 2 shows that the hotel chain with a central reservation system and the hotel chain without a central reservation system are using a PMS, whereas the standalone hotel is not using a PMS. Table 3 shows the investment made to procure the PMS by the different types of hotels. Hotels in Siliguri with a central reservation system are using a PMS, but the standalone hotels are not in a position to maintain one. The data were analysed using descriptive statistics to establish the facts related to the effectiveness of PMS in the operation of the hotels.

Table 4 depicts that even the hotels with a central reservation system that are using a PMS find many features unavailable, or are not in a position to use them.
Conclusions
The study establishes the percentage of star category hotels that are using a PMS in the Siliguri area. The PMS needs to become more developed and effective for the smooth functioning of the hotels in the said area. The hotels need to become more aware of the features of the PMS, and special training needs to be conducted for the staff of the hotels. Based on the findings of this study, the following recommendations were made: (1) Special PMS training needs to be conducted in the hotels of Siliguri by expert hoteliers. (2) Staff awareness of the PMS needs to be increased. (3) The cost of accessing/using information & communication technologies should be subsidized or reduced. (4) More multinational companies providing PMS should make themselves available in the city. (5) Computer literacy must be made a prerequisite condition for employment in the hotel industry. (6) Post- and pre-employment training must be imparted to employees.
References
1. Jatashankar R. Tiwari, Hotel Front Office Operations & Management.
2. S. K. Bhatnagar, Front Office Management.
3. Zorica Krželj-Čolović, University of Dubrovnik, Department of Economics and Business Economics, and Zdenko Cerović, University of Rijeka, Faculty of Tourism and Hospitality Management, Opatija, "Implementation of Property Management System in Hotel Industry".
4. Ali Yousaf, Supervisor, and Anders Steene, Associate Professor, Södertörns University, Department of Business Studies, "The Impact of ICT in the Eyes of Hotel Managers (Cyprus)".
5. Vasuki Bellary, University of Nevada, Las Vegas, "A case study on the effect of information technology related interface issues on overall guest experience in Hyatt Place hotels in the U.S.".
A Study of Potential Distribution on Disc Insulator Using Charge Simulation Method
1 Tanmay Saha, 1 Suman Dutta, 1 Kushal Sarkar, 1 Saswata Goswami, 2 Dr. Santanu Das
1 3rd Year Students, Electrical Engineering Department, Jalpaiguri Govt. Engg. College
2 Associate Professor, Electrical Engineering Department, Jalpaiguri Govt. Engg. College
Abstract
This paper presents a method to study the potential distribution of a disc insulator through an implemented analytical scheme based on the Charge Simulation Method (CSM). Insulator strings in power systems serve the dual purposes of providing mechanical support and electrically isolating the live phase conductor from the support tower. Depending upon the geometrical shape of the insulator, the electric field and voltage distributions become mostly uneven, which may lead to insulator surface deterioration, corona and even flashover. So numerical computing methods capable of quickly providing the electric field and potential distribution with reasonable accuracy are of great importance in the context of condition assessment of electrical insulators. In this paper, a CSM based numerical computing method is proposed to assess the field distribution of a disc insulator. After comparing the results obtained from the proposed scheme with results available in the literature for similar types of analyses, it has been found that the proposed method provides reasonably accurate results.
Keywords: Electric field distribution; voltage distribution; Charge Simulation Method; fictitious charges
1. Introduction
The electric potential distribution of an insulator under a given applied voltage in a specific medium is a very important aspect of insulator design. For the calculation of potential there are various techniques available, such as the Finite Element Method (FEM), Finite Difference Method (FDM), Boundary Element Method (BEM) and Charge Simulation Method (CSM). Among the numerical methods stated above, the CSM is a simple, accurate and easy-to-code method. Moreover, it is very suitable for electric potential analysis of open boundary and irregular profile problems [1]. In the CSM, the potential and the electric field are traditionally computed by assuming a set of fictitious charges, which are assigned inside the region of interest. The assumptions thus made often play a significant role in the accuracy obtained from the CSM. A disc insulator of given potential and fixed radius is taken, outside which the potential distribution is to be formulated using CSM with the aid of MATLAB. The disc insulator is assumed to be composed of several fictitious ring charges, with contour points on the periphery, from which the charge matrix is calculated using the electrostatic equations of the CSM [2]. Using this charge matrix, the potential distribution is plotted against the distance from the surface of the insulator.
2. Scope of the Project Work
The objective of the presented work is to analyse the potential distribution outside a disc insulator using the Charge Simulation Method. A disc insulator of fixed radius is taken, having a potential of 11 kV. Fictitious charges are distributed on the surface of the insulator and contour points are taken along its periphery. Using this information, the charge matrix is determined, from which the potential distribution outside the surface is plotted radially.
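A compact sketch of this procedure is given below. It is illustrative only and not the authors' MATLAB code: the placement of the fictitious charges just inside the periphery, the use of the porcelain permittivity quoted in the paper for the coefficient calculation, and all names and values are assumptions made for the example.

```python
import numpy as np

EPS = 8.854e-12 * 6.0              # permittivity used for the example (eps_r = 6, as stated in the paper)
R, V_APPLIED, N = 5.0, 11e3, 60    # disc radius (m), applied potential (V), number of charges

def ring(radius, n, z=0.0):
    """n points evenly spaced on a circle of the given radius in the plane z = const."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([radius * np.cos(t), radius * np.sin(t), np.full(n, z)])

def pcoeff(points, charges):
    """Point-charge potential coefficients P_ij = 1 / (4 * pi * eps * d_ij)."""
    d = np.linalg.norm(points[:, None, :] - charges[None, :, :], axis=2)
    return 1.0 / (4.0 * np.pi * EPS * d)

charges = ring(0.8 * R, N)         # fictitious charges placed just inside the boundary
contours = ring(R, N)              # contour points on the disc periphery
Q = np.linalg.solve(pcoeff(contours, charges), np.full(N, V_APPLIED))   # [U] = [P][Q]

# Potential along a radial line moving away from the disc surface
r = np.linspace(1.02 * R, 4.0 * R, 100)
eval_pts = np.column_stack([r, np.zeros_like(r), np.zeros_like(r)])
phi = pcoeff(eval_pts, charges) @ Q
print("potential at r = %.1f m: %.0f V" % (r[-1], phi[-1]))
```

On this simple model the computed potential falls off monotonically with radial distance from the disc surface, which is the behaviour described in section 4 below.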

3. Charge Simulation Method
According to the charge simulation method (CSM) [2-3], if several discrete charges of any type (point, line or ring, for instance) are present in a region, the electrostatic potential at the i-th contour point can be found by summing the potentials resulting from the individual fictitious charges (Q_j), as long as the point does not reside on any one of the charges. Let Q_j, j = 1, ..., n, be the fictitious charges and U_i be the potential at any point within the space. Contour points are placed on the insulator boundary corresponding to each fictitious charge within the conductor. The potential at any given contour point i is given by
U_i = Σ_j P_ij Q_j, i = 1, 2, ..., n   (1)
where, for point charges, the potential coefficients are
P_ij = 1 / (4πε sqrt((x_i - x_j)² + (y_i - y_j)² + (z_i - z_j)²)),
and x_i, y_i, z_i are the co-ordinates of the i-th contour point and x_j, y_j, z_j are the co-ordinates of the j-th point charge. Equation (1) leads to a system of linear equations in the n charges, which can be expressed as
[U] = [P] [Q]   (4)
Solution of equation (4) gives the matrix of fictitious charges. Using this matrix of fictitious charges, the potential distribution outside the surface of the insulator is calculated from
φ_k = Σ_j P_kj Q_j,
where k is any point outside the surface of the insulator.
4. Results and Analyses
The CSM as described above is applied to a disc insulator energized with an 11 kV dc voltage. The disc, of radius 5 m, is made of porcelain (ε_r = 6). The resulting potential distribution is shown in Fig 1. The field distribution can also be shown using a scatter plot, as in Fig 2, where the disc lies in the x-y plane and the potential distribution is shown along the z-axis. The potential decreases gradually as the distance from the surface of the insulator increases. After detailed analysis of the conventional and CSM based methods, it can be seen that the CSM approach is fast, efficient and user friendly in the context of computing the potential distribution outside the periphery of a disc insulator. Although the conventional method has its own accuracy owing to manual calculations, a considerable amount of accuracy is also achieved by the CSM, and the labour behind the manual calculations is not needed here.
Fig 1: Potential distribution as generated by MATLAB (X-axis: potential at each of these points; Y-axis: distance from the surface of the disc insulator)
Fig 2: Potential distribution using a scatter plot as generated by MATLAB using CSM (X-Y axes: radial distance from the surface of the disc insulator; Z-axis: potential along the radial distance)
5. Conclusions
As presented in this paper, the charge simulation method can be applied to calculate the potential distribution of any three-dimensional system at any given point. The presented approach gives an idea of the variety of possible applications, and further development is also possible. The method is very simple and is developed from basic electrostatic equations. An important advantage of CSM is the high accuracy of potential field calculation which can be achieved without large manual computational effort. The potential distribution plot obtained here shows that as the distance from the periphery of the insulator increases, the potential decreases gradually. Correct placement of the fictitious charges and contour points increases the accuracy of the results. Also, by increasing the number of point charges, the accuracy level of the proposed method can be increased further.
References
1. W.-S. Chen et al., "Optimal Design of Support Insulators Using Hashing Integrated Genetic Algorithm," IEEE Transactions on Dielectrics and Electrical Insulation, Vol. 15, No. 2, April.
2. G. Ininahazwe and I. E. Davidson, "A Study of the Distribution of Voltage and Electric Field on Glass Insulators using Charge Simulation Method," 24th Southern African Universities Power Engineering Conference, January 2016, Vereeniging, South Africa.
3. Nazar H. Malik, "A Review of the Charge Simulation Method and its Applications," IEEE Transactions on Electrical Insulation, Vol. 24, No. 1, February 1989.

Modified Carry Increment Adder (CSLA-CIA)
Snehanjali Majumder 1, Nilkantha Rooj 2
Electronics and Communication Engineering, Jalpaiguri Government Engineering College, JGEC
1 snehanjali.maj@gmail.com, 2 nilkantharooj48@gmail.com
Abstract
This paper describes a modified version of the carry increment adder which shows a lower delay than the conventional carry select adder (CSLA). This adder obviously also has an advantage over the conventional ripple carry adder (RCA). It is demonstrated that such adders are comparatively faster even though they have the same area. The designs have been implemented in the Cadence Virtuoso tool using 45 nm technology, and all the results are collected from simulation.
Keywords: CSLA-CIA, delay, RCA, conventional CIA, RCA-CIA, CSLA-BEC
1. Introduction
The performance of an integrated circuit depends upon its speed, area, delay and power consumption. Addition being the fundamental arithmetic operation, binary adders are among the most used components in the design of integrated circuits and serve as a basic building block of the ALU. Since the propagation of the carry is of prime importance, in this paper we have developed an adder that is faster in this respect: if the calculation of the sum in an adder is made fast and the internal propagation delay is reduced, the adder will be efficient. Adders are commonly used as fundamental logic segments of microprocessors and digital signal processing chips; they are also used in subtractor and multiplier circuits. The performance of a digital logic block is directly proportional to the performance of its adder components. The essential parameters for determining the quality of adder designs are fan-out, area and propagation delay.
2. Background
2.1 Conventional CIA
The conventional CIA works on an RCA block. After the sum has been generated in the first two stages, the carry out and the sum of the next block are put into a half adder, and the third sum is calculated as the half adder sum. The output carry generated from the half adder and the succeeding sum from the RCA are again put into a half adder, and in this way the next sum is also calculated. Ultimately, the output carry from the ripple carry adder is ORed with the carry from the block of half adders to obtain the last carry of the block. This is the logic of the conventional CIA, or the RCA-CIA as it is called. This defines a 4-bit RCA-CIA block, and 8-bit and 16-bit blocks can be designed similarly; Reference [2] has been used here.
Fig 1: Conventional CIA block
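A small behavioural model may help make the carry increment structure concrete. The sketch below is illustrative only (the authors' design is a gate-level schematic in Cadence Virtuoso, and all names here are assumed): it builds an 8-bit adder from two 4-bit RCA blocks and the half-adder increment chain described above, and checks it against ordinary addition.

```python
def rca4(a, b, cin):
    """4-bit ripple carry adder on integer operands; returns (sum_bits, carry_out)."""
    s, c = [], cin
    for i in range(4):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s.append(ai ^ bi ^ c)
        c = (ai & bi) | (c & (ai ^ bi))
    return s, c

def cia8(a, b, cin=0):
    """8-bit carry increment adder built from two 4-bit RCA blocks.

    The upper RCA always runs with carry-in 0; the lower block's carry-out is then
    propagated through a chain of half adders that conditionally increments the
    upper sum, and the chain's carry is ORed with the upper RCA's carry-out.
    """
    lo, c_lo = rca4(a & 0xF, b & 0xF, cin)
    hi, c_hi = rca4(a >> 4, b >> 4, 0)
    inc = c_lo                      # carry entering the increment (half adder) chain
    out_hi = []
    for bit in hi:                  # half adder per bit: sum = bit ^ inc, carry = bit & inc
        out_hi.append(bit ^ inc)
        inc = bit & inc
    cout = c_hi | inc               # final carry of the block
    bits = lo + out_hi
    return sum(b << i for i, b in enumerate(bits)) | (cout << 8)

# exhaustive check against ordinary addition
assert all(cia8(a, b) == a + b for a in range(256) for b in range(256))
print("8-bit CIA matches a + b for all operand pairs")
```

In the modified CSLA-CIA of section 3, the same increment chain is retained while the RCA blocks are replaced by carry select blocks.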
After the true carry is inserted as the input,the sum of each stage is produced by choosing each block carry as either 0 or 1.The common way has been that linear designs use two half bits of adders,one for evaluating the upper significant bit and the other for the lower significant bits. Naturally while doing the same activity twice,the area needed increases but the speed of the operation increases. Carry Select Adder using BEC Logic : BEC is used in lieu of the RCA to minimize the area and power consumption of the conventional CSLA. Instead of 2 n-bit RCA blocks, only one n+1-bit BEC block is needed. One input of the 8:4 mux is the input(b3,b2,b1,andb0)and the other is the BEC output. This generates two possible results at the same time and the multiplexer is used to choose either the BEC output or the direct inputs based on the command flag Cin. B [3:0] X [3:0] Table 1: Truth table of BEC Fig 1: Conventional CIA block
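As an illustration of this selection logic, a minimal Python sketch of the 4-bit BEC and the mux-based choice of a CSLA stage is given below. The function names and the integer-level behavioural modelling are illustrative assumptions, not the Cadence design used in the experiments.

```python
def bec4(b):
    """Binary-to-Excess-1 Converter: returns (b + 1) mod 16 using the
    gate-level expressions quoted in the paper."""
    b0, b1, b2, b3 = [(b >> i) & 1 for i in range(4)]
    x0 = b0 ^ 1                      # X0 = not B0
    x1 = b0 ^ b1                     # X1 = B0 xor B1
    x2 = b2 ^ (b1 & b0)              # X2 = B2 xor (B1 and B0)
    x3 = b3 ^ (b2 & b1 & b0)         # X3 = B3 xor (B2 and B1 and B0)
    return x0 | (x1 << 1) | (x2 << 2) | (x3 << 3)


def csla_bec_4bit(a, b, cin):
    """One 4-bit CSLA-with-BEC stage: an RCA computes a+b assuming
    carry-in 0, the BEC supplies the carry-in-1 result, and the actual
    carry selects between them."""
    sum0 = (a + b) & 0xF             # RCA result with cin = 0
    carry0 = (a + b) >> 4
    sum1 = bec4(sum0)                # BEC gives the cin = 1 sum
    carry1 = carry0 | (1 if sum0 == 0xF else 0)
    return (sum1, carry1) if cin else (sum0, carry0)


# quick check against plain addition for every 4-bit input pair
for a in range(16):
    for b in range(16):
        for cin in (0, 1):
            s, c = csla_bec_4bit(a, b, cin)
            assert (c << 4) | s == a + b + cin
```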

The logical expressions for a 4-bit BEC-CSLA are: X0 = ~B0, X1 = B0 XOR B1, X2 = B2 XOR (B1 AND B0), X3 = B3 XOR (B2 AND B1 AND B0). Carry Select Adders, however fast they may be, tend to occupy a large area since they use multiple blocks of Ripple Carry Adders. BEC is a slight improvement, as it uses comparatively fewer gates owing to its inverting technique.

3. Modified CSLA-CIA (Proposed)
After the sum has been generated in the first two stages, the carry out and the sum of the next block are put into a half adder, and the third sum is obtained as the half-adder sum. The output carry from the half-adder block and the succeeding sum from the Carry Select Adder block are again put into the half adder, and in this way the next sum is also calculated. To obtain the ultimate carry output of the block, we OR the final carry from the Carry Select Adder block and the carry evaluated by the half-adder segment. This is the logic of the proposed CIA, or the CSLA-CIA as it is called. This defines a 4-bit CSLA-CIA block, and 8-bit and 16-bit blocks can be designed similarly. (A small behavioural sketch of this carry-increment arrangement is given after the references below.)
Fig 3: Modified CSLA-CIA block
Fig 4: CSLA-CIA 16-bit schematic

The simulation study of the suggested CSLA-CIA is shown here. The simulation as well as the design synthesis is carried out under the Cadence Virtuoso environment. We have designed various types of Carry Select Adders using the Cadence Virtuoso tool in 45 nm technology. During the simulation of these adders we have calculated the different parameters such as propagation delay, power dissipation and gate count. We have performed the experiment for the addition of 8-bit binary numbers. The CSLA-CIA is built using two modules of 4-bit RCA and the incremental circuitry formed by the half adders. This indicates that the CSLA-CIA is faster than the conventional CIA; therefore it is quite evident that the CSLA-CIA will be better than the conventional CIA in terms of delay performance [Reference 2].
Table 2: Comparison among different adders based on results from our simulation (average delay in ns, average power (E-6), gate count) for the Conventional CSLA, the CSLA with BEC and the Modified CSLA-CIA at 4, 8 and 16 bits.

Conclusions
We have compared and studied the gate count, delay and power of the different adders, namely the conventional CSLA and the CSLA with BEC, and have then produced our own modified design, the recommended CSLA-CIA. In this advanced scheme, the carry-select activity is performed before the evaluation of the binary sum, and this is where it departs from the designs made so far [References 4, 5 have been used here]. The work on carry increment adders is relatively new and not much research has been done on them.

References
1. Reto Zimmermann and Hubert Kaeslin, Cell-Based Multilevel Carry-Increment Adders with Minimal AT- and PT-Products, IEEE Transactions on VLSI.
2. Aribam Balarampyari Devi, Manoj Kumar and Romesh Laishram, Design and Implementation of an Improved Carry Increment Adder, National Institute of Technology, Manipur, India.
3. Jasbir Kaur and Lalit Sood, Comparison between Various Types of Adder Topologies, IJCST, Vol. 6, Issue 1, Jan.
4. K. Bala Sindhuri, Implementation of Regular Linear Carry Select Adder with Binary to Excess-1 Converter, International Journal of Engineering Research, Volume No. 4, Issue No. 7, July.
5. G. B. Rosenberger, Simultaneous Carry Adder, U.S. Patent 2,966,305, Dec. 27, 1960.
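As noted in Section 3, the carry-increment arrangement can be illustrated with a small behavioural sketch in Python. The block width, function name and integer-level treatment are illustrative assumptions, not the Cadence schematic of Fig 4.

```python
def cia16(a, b):
    """16-bit carry-increment adder modelled from four 4-bit blocks.
    Each block adds its slice assuming carry-in 0; an increment stage
    (the half-adder chain) then folds in the carry arriving from the
    previous block."""
    carry = 0
    result = 0
    for i in range(4):
        sa = (a >> (4 * i)) & 0xF
        sb = (b >> (4 * i)) & 0xF
        block_sum = sa + sb            # block result with carry-in 0
        s = block_sum + carry          # increment stage adds incoming carry
        result |= (s & 0xF) << (4 * i)
        carry = s >> 4                 # carry handed to the next block
    return result, carry


# quick sanity check against plain 16-bit addition
assert cia16(0xFFFF, 0x0001) == (0x0000, 1)
assert cia16(0x1234, 0x4321) == (0x5555, 0)
```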

4G Service Selection for Out-station Students using the Analytical Network Process Method

Gaurab Paul 1, Surath Banerjee 1, Ranajit Midya 2, Dr. Anupam Haldar 2, Dipika Pramanik 3
1 Student, Department of Mechanical Engineering; 2 Faculty, Department of Mechanical Engineering; 3 Department of Information Technology
Netaji Subhash Engineering College, Kolkata

Abstract
4G service selection is one of the crucial activities in the present competitive service-provider scenario, as people now have plenty of scope to select a 4G service provider. In this paper, two 4G service providers are the two alternatives and there are three criteria on which they are evaluated; hence it becomes a multi-criteria decision-making case. After the AHP calculations, the inter-criteria comparisons are done for the ANP model, and different service providers are considered in this work. The mathematical analysis shows that Reliance Jio services are more preferable, and this can also be interpreted in a realistic sense, as outstation students have cost constraints due to food and lodging expenditure. Hence Reliance services will be of fruitful use to them.

Keywords: 4G service, AHP, ANP, Reliance Jio 4G, Airtel 4G, multi-criteria decision making, purchasing.

2. AHP and ANP
In the AHP method a decision problem is sub-divided into levels, i.e. goal, criteria, sub-criteria and alternatives. First, an inter-criteria comparison is carried out by weighing the different criteria to find their weights or relative importance. The relative weights of the criteria are placed in a matrix and its eigenvectors are calculated; the eigenvector values give the dominance of the different criteria. Then the different alternatives are weighted on the different criteria and placed into a matrix. The eigenvalues of the rows containing the alternatives are calculated and placed in specific cells of the supermatrix, which also contains the criteria weightages. The supermatrix is squared and the AHP results are found. In the AHP calculations the inter-criteria comparisons are not made with respect to the individual alternatives, and hence we do not find the dominant characteristic of the different alternatives; this is the main advancement made in ANP over AHP. ANP carries out the inter-criteria comparisons based on a particular alternative. The inter-criteria comparisons are carried out in the same way for each alternative, and the eigenvalues give the relative importance of the different criteria. The weights are placed in specific cells of the supermatrix, and the supermatrix is raised to the power k+1, where k is a large number, until the results appear in the corner cells. In this way results are found using ANP.
Fig 1: Examples of interdependence. Notes: (1) uncorrelated elements are connected; (2) uncorrelated levels are connected; (3) dependence of two levels is two-way (i.e. bi-directional).

Solution methodology for 4G service selection: The decision on a 4G SIM card favourable for outstation students is made on the basis of the following three criteria: a) connectivity, b) internet speed and c) consumption bill. The two service providers, a) Airtel 4G and b) Reliance Jio 4G, are compared using AHP.
Fig 2: Hierarchy diagram for 4G service selection
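To make the eigenvector step concrete, a minimal numpy sketch of deriving criteria weights from a pairwise comparison matrix is given below. The matrix entries are illustrative placeholders, not the survey judgements used in this work.

```python
import numpy as np

# Illustrative pairwise comparison of the three criteria
# (connectivity, internet speed, consumption bill).
A = np.array([[1.0, 2.0, 1/3],
              [1/2, 1.0, 1/4],
              [3.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()   # normalised criteria weights
print(weights)
```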

AHP calculations
Squaring the above matrix, we get the weightages: Airtel 4G 20% and Reliance Jio 4G 80%, so AHP gives more preference to the use of Reliance services.
Table 1: Weights of the service providers for the different criteria

ANP calculations
In the ANP calculations the inter-criteria weightages are calculated with respect to each of the two alternatives, i.e. w.r.t. connectivity, w.r.t. internet speed and w.r.t. bill.
Table 3: Inter-criteria weightages for Airtel
Table 4: Inter-criteria weightages for Reliance Jio
We enter the eigenvalues obtained from the above two matrices into the supermatrix.
Table 2: Unweighted supermatrix
We then evaluate the supermatrix raised to the power (k+1), where k is a large number, to obtain the limit matrix. Hence the ANP calculations also show that the use of Reliance Jio 4G services is preferable to Airtel 4G services.

4. Conclusion
The mathematical analysis shows that Reliance Jio services are more preferable, and this can also be interpreted in a realistic sense, as outstation students have cost constraints due to food and lodging expenditure. Hence Reliance services will be of fruitful use to them. We also came to understand the MCDM methods better.
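A compact sketch of the limit-matrix computation described above is shown here with a made-up supermatrix; the values of the paper's Table 2 are not reproduced.

```python
import numpy as np

def limit_matrix(W, tol=1e-9, max_iter=10000):
    """Raise a column-stochastic supermatrix to successive powers until
    it stabilises; the stabilised columns give the ANP priorities."""
    M = W.copy()
    for _ in range(max_iter):
        nxt = M @ W
        if np.allclose(nxt, M, atol=tol):
            return nxt
        M = nxt
    return M

# Tiny illustrative supermatrix (criteria and alternatives stacked);
# every column sums to 1 and the entries are invented for demonstration.
W = np.array([[0.2, 0.1, 0.6, 0.2],
              [0.1, 0.2, 0.2, 0.5],
              [0.4, 0.3, 0.1, 0.2],
              [0.3, 0.4, 0.1, 0.1]])
print(limit_matrix(W))
```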

Lookup Table based Genome Compression Technique

Syed Mahamud Hossein 1, Aditya Kumar Samanta 2
1 Research Scholar, Vidyasagar University, Midnapur, West Bengal, India
2 Assistant Professor, Dept. of Information Technology, Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal, India
mahamud123@gmail.com 1, adityaksamanta@gmail.com 2

Abstract
The genome (DNA/RNA) sequence compression algorithm is based on lookup tables, built in two ways: the first table consists of the factorial of the 4 bases, i.e. 24 substrings, each substring 4 bases long, and the other lookup table consists of the 4^3 combinations, i.e. 64 substrings, each substring 3 bases long. These two lookup tables act as a pre-coding routine. The proposed method may also protect the data from unauthorized users. Genome sequence compression will help to increase the efficiency of DNA/RNA use. The algorithms require little memory, are easy to use, and each operation takes a fraction of a second, depending on the task and the size of the sequence. The output of this algorithm is further compressed by the LZMA algorithm to reduce the compression ratio and rate. This technique can approach a compression rate of 1.87 bits/base and even lower.

Keywords: Genome, Compression, LZMA, LUT and Security
Abbreviations: DNA - Deoxyribonucleic acid, RNA - Ribonucleic acid, LUT - Look Up Table, LZMA - Lempel-Ziv-Markov chain algorithm

1. Introduction
The Deoxyribonucleic acid (DNA)/Ribonucleic acid (RNA) encoding life is a non-random sequence. DNA/RNA sequences consist of four nucleotide bases, i.e. a, c, g, t (in the case of RNA, t is u), and each base requires one byte to store. This evidence proves that DNA sequences should be compressible, but DNA/RNA sequence compression is a very difficult task because the regularities in DNA sequences are much subtler. So traditional compression algorithms and DNA sequence compression algorithms cannot be counted on to share the same properties, and DNA/RNA sequences require a new compression algorithm for computing the DNA content and achieving better compression results. In fact, it is our purpose to reveal such subtleties by matching a lookup table of the factorial of the 4 base pairs (24 substrings) and a 3-letter-codon lookup table (64 substrings) with the source DNA/RNA sequence using a more appropriate compression algorithm. These two lookup tables are called static lookup tables (SLUT). Another lookup table, called the dynamic lookup table (DLUT), has also been developed; the details of DLUT are discussed in paper [1]. The two algorithms, SLUT-3 and SLUT-4, are based on exact matching between the lookup table and the source file. These algorithms produce better output on standard benchmark data, and our results are compared with the existing compression results.

2. Methods
2.1 Encoding and decoding steps
Input sequence encoded by Look Up Table-I: A genome sequence contains the base pairs A, T, G and C for DNA, with T replaced by U in the case of RNA. That means the possible orientations of the four characters number factorial of 4, i.e. 24 substrings (factorial(4) = 24), each substring 4 bases long.
Input sequence encoded by Look Up Table-II: Since there are 4 possible bases (A, C, G, T/U) and 3 bases in a codon, there are 4 x 4 x 4 = 64 possible codon sequences.
The braces behind each character contain the corresponding ASCII codes of these characters. For easy implementation, the characters a, u/t, g, c will no longer appear in the pre-coded file, while A, T, G, C will appear in the pre-coded file. For instance, if a segment aaugccccuuuuggga..n has been read, it is written in the destination file as adnc9txa..n. Obviously, the destination file is case-sensitive.

2.2 The two encoding (compression) and two decoding (decompression) algorithms
The DNA sequence compression algorithm uses two lookup-table pre-coding routines: one maps the factorial of ATGC (forming 24 substrings) onto 24 ASCII characters, and the other maps the 3-letter codons (forming 64 substrings) onto 64 ASCII characters. Since the essence of compression is a mapping between the source file and the destination file, the compression algorithm is dedicated to finding this relationship. This work tries to build a finite LUT which implements the mapping relationship of our coding process. LZMA, short for Lempel-Ziv-Markov chain algorithm, is a data compression algorithm. LZMA is a dictionary compression scheme that uses a larger dictionary size than the deflate method and a compression scheme somewhat similar to LZ77. This improved version of the original LZ78 algorithm is perhaps the most famous modification and is sometimes even mistakenly referred to as the Lempel-Ziv algorithm [2-3]. It can produce a higher compression ratio than older methods. In this regard the 7z simulator [4] is used.
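A minimal Python sketch of the 3-base (LUT-3) pre-coding idea follows. The particular character assignments, function names and handling of leftover bases are illustrative assumptions, not the exact table used in the algorithm, and the sketch assumes a pure lowercase a/c/g/t input.

```python
from itertools import product

BASES = "acgt"           # for RNA, 'u' takes the place of 't'

# LUT-3: the 4^3 = 64 possible codons, each mapped to one printable
# ASCII character (an arbitrary illustrative assignment).
CODONS = ["".join(p) for p in product(BASES, repeat=3)]
ENCODE = {codon: chr(33 + i) for i, codon in enumerate(CODONS)}
DECODE = {ch: codon for codon, ch in ENCODE.items()}

def precode(seq):
    """Replace each complete 3-base group by one character; any symbol
    that does not start a known codon is passed through unchanged."""
    out, i = [], 0
    while i + 3 <= len(seq):
        codon = seq[i:i + 3]
        if codon in ENCODE:
            out.append(ENCODE[codon])
            i += 3
        else:
            out.append(seq[i])
            i += 1
    out.append(seq[i:])               # leftover tail shorter than a codon
    return "".join(out)

def decode(text):
    """Reverse mapping: expand each coded character back to its codon."""
    return "".join(DECODE.get(ch, ch) for ch in text)

assert decode(precode("acgtacgtacg")) == "acgtacgtacg"
```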

3. Algorithm Evaluation
3.1 Accuracy: It is not tolerable that any mistake exists either in compression or in decompression. Although this has not yet been proved mathematically, a mapping algorithm has been developed for one-to-one checking between the input and the output data.
3.2 Efficiency: This pre-coding algorithm compresses the original file from 4 or 3 characters into 1 character when run on any DNA sequence; as a result, the output file uses fewer ASCII characters to represent successive DNA bases than the input file.
3.3 Time elapsed: Modern research requires many compression algorithms, but most of them take considerable time to execute. Because the above algorithm is based on a LUT rather than on sequence statistics, it saves the time needed to obtain statistical information about the sequence.
3.4 Space occupation: This algorithm reads characters from the input file and writes them immediately into the output file. It requires very little memory, storing only a few characters, so the space occupation stays at a constant level. In these experiments the OS had no swap partition; all processing was done in main memory, which is only a few megabytes on the PC.

4. Experimental Results
The algorithm was run on the standard benchmark data used in [5]. The definition of the compression ratio is the same as in [5]: it is defined as 1 - (|O| / |I|), where |I| is the length (number of bases) of the input DNA sequence and |O| is the length (number of bits) of the output sequence. The compression rate is defined as |O| / |I|, with |I| and |O| as above. The improvement is defined [5] as (Ratio_of_Arith-2 - Ratio_of_DLUTLZMA) / Ratio_of_Arith-2 x 100, and is verified on the average value in each case. The compression ratio and compression rate of LUT-3, as well as those of LUT-4, are presented in the graph below.
Graph-I: Line chart showing the comparison of the compression ratios of the above algorithms
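These definitions translate directly into small helper functions; the sketch below only restates the formulas above, and the function names are illustrative.

```python
def compression_rate(input_bases: int, output_bits: int) -> float:
    """Bits per base, as defined above: |O| / |I|."""
    return output_bits / input_bases

def improvement(ratio_arith2: float, ratio_lut_lzma: float) -> float:
    """Percentage improvement over order-2 arithmetic coding,
    following the definition quoted from [5]."""
    return (ratio_arith2 - ratio_lut_lzma) / ratio_arith2 * 100.0

# e.g. a 100000-base sequence compressed to 187000 bits gives 1.87 bits/base
print(compression_rate(100000, 187000))
```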

Table-III: Number of substrings found
Table-IV: Comparison of compression rates
Graph-II: Line chart showing the comparison of the compression ratios of the above algorithms in Table II

5. Result Discussion
The above results show that this algorithm performs better than the existing compression algorithms with respect to compression rate, compression ratio and elapsed time. This algorithm can be used for storing DNA sequences for future use, and sequences can be kept as records in a database instead of being maintained as files. Using the reverse method, users can obtain the original sequences in a time so short it can hardly be noticed. Additionally, this algorithm can be easily implemented, while some others take more time to execute. From these experiments we conclude that the pre-coding matching patterns are the same in all types of sources and that the pre-coding lookup table plays a key role in finding similarities or regularities in DNA sequences. The line graph shows that the compression rate is similar for all types of sources. The output file contains ASCII characters along with the unmatched a, t, g and c, so it can also provide information security, which is very important for data protection during transmission. The output results show that the LUT-3 algorithm is better than the LUT-4 algorithm with respect to compression ratio, compression rate and speed for both encoding and decoding. The algorithm successfully compressed the nucleotide sequences without data loss.
Table V: Comparison of compression rates

6. Conclusion
The lookup-table-based DNA compression technique is the best compression method that reveals the true characteristics of DNA sequences. The LUT-based compression results indicate that our method is more effective than many existing methods. The lookup table is able to detect more regularities in DNA sequences, such as mutation and crossover, and achieves the best compression results by using this observation.

7. Acknowledgement
Above all, the authors are grateful to all our colleagues for their valuable suggestions, moral support, interest and constructive criticism of this study. The authors offer special thanks to their Ph.D. guides for helping to carry out the research work, and would also like to thank our PCs.

8. References
1. Syed Mahamud Hossein et al., A Compression & Encryption Algorithm on DNA Sequences Using Dynamic Look up Table and Modified Huffman Techniques, I.J. Information Technology and Computer Science, 2013.
2. Mark Nelson and Jean-loup Gailly, The Data Compression Book, 2nd Edition, M&T Books, New York, NY (1995).
3. S. Shanmugasundaram and R. Lourdusamy, A Comparative Study of Text Compression Algorithms, International Journal of Wisdom Based Computing, 1, 3 (2011).
4. Igor Pavlov, 7z format.
5. Xin Chen, Sam Kwong and Ming Li, A Compression Algorithm for DNA Sequences Using Approximate Matching for Better Compression Ratio to Reveal the True Characteristics of DNA, IEEE Engineering in Medicine and Biology, pp. 61-66, July/August.

To Suggest A Better Strategy For Marketing Using Social Network Analysis

Sayari Mondal, Sneha Saha, Madhumita Das, Kriti Pal, Nabanita Das, Chinmoy Ghosh
Department of Computer Science and Engineering, Jalpaiguri Government Engineering College, Jalpaiguri
chinmoy.ghosh@cse.jgec.ac.in

Abstract
Recommendation systems play a vital role in the marketing industry. We know how influence through social networks nowadays leaves an impact on a person's behaviour and opinions. Traditional recommender systems used metrics such as an item's rating and general user preferences to recommend products to users. In this paper, we utilise information from a real online social network to build a personalised recommender system. Extracting data from social networks will not only improve the prediction accuracy of recommender systems but also provide solutions to overcome the limitations of traditional recommender systems.

1. Introduction
The marketing industry has been using social influence ever since social networks were introduced, in what is commonly known as social media marketing. In recent years, the enormous growth of digital information as well as of social networks has led to information overload. To cope with this overload, personalised recommender systems have been used effectively. Information retrieval systems such as Google, Devil Finder and Altavista have played an important role in easing access to the internet. But with increasing information, users need only those items of information or products which are useful for their purpose and which can be trusted (in the case of products). For this, a facility is needed which collects the reviews of experts or users; those reviews are then useful for the users who are going to buy the same product or any similar product. This strategy has been widely used by online marketing sites; the success of Amazon and Netflix are real examples of companies that have used their own personalised recommender systems. Recommender systems are algorithms operating on large datasets that deal with the problem of information overload by filtering the desired information from large segments of data according to individual preferences and interests.
Their goal is to find similar items based on users whose preferences are somewhat similar, as well as on the items rated; the items must also possess some similarity, like the users. In this paper we use a likelihood-based measure to calculate this, and the conclusions are based on the metrics precision, recall and F-measure [1].

2. Methodology
2.1 Collaborative Filtering Recommender System
Collaborative filtering is a domain-independent prediction technique based on gathering and examining a large amount of information about users' demands, activities or preferences, and predicting the taste of a particular user by using their similarity with other users. The technique of collaborative filtering can be divided into the following categories:

2.1.1 Memory-Based Algorithms
The task in collaborative filtering is to predict the votes of a particular user from a database of users' votes, drawn from a sample or population of other users. The user database consists of a set of votes, some partial information regarding the active user, and a set of weights calculated from the user database.

2.1.2 Model-Based Collaborative Filtering Algorithms
These algorithms provide item recommendations by first developing a model of user ratings. They take a probabilistic approach and envision the collaborative filtering process as computing the expected value of a user prediction, given the user's ratings on other items.

2.1.3 Item-Based Collaborative Filtering
Item-based approaches use the similarity between items. The unknown rating of a test item by a test user can be predicted by averaging the ratings of other similar items rated by that user. Each item is sorted and re-indexed according to its dissimilarity towards the test item in the user-item matrix, and ratings from more similar items are weighted more strongly. [2][3][4]

2.2 Content-Based Recommender System
The content-based technique is a domain-dependent algorithm. In this technique, recommendations are made based on user profiles, using features extracted from the content of the items the user has evaluated in the past. This filtering uses different types of models to find the similarity between documents in order to generate recommendations. It can use the Vector Space Model (such as Term Frequency-Inverse Document Frequency (TF/IDF)), probabilistic models (such as the Naive Bayes classifier), decision trees or neural networks to model the relationship between different documents.

2.3 Demographic-Based Recommender System
This system aims to categorize users based on attributes and make recommendations based on demographic classes. In this system the algorithms first need proper market research in the specified region, accompanied by a short survey to gather data. The advantage of a demographic approach is that it does not require a history of user ratings, unlike collaborative and content-based recommender systems.

2.4 Utility-Based Recommender System
This type of recommender system makes suggestions based on the computation of the utility of each object for the user. The central problem for this type of system is how to create a utility function for individual users. The main advantage of this recommender system is that it can factor non-product attributes, like vendor reliability and product availability, into the utility calculation. This makes it possible to check the real-time inventory of the object and display it to the user.

2.5 Knowledge-Based Recommender System
This type of recommender system tries to suggest objects based on inferences about a user's needs and preferences. The recommendation works on functional knowledge of how a particular item meets a particular user's need.

2.6 Hybrid Recommender System
The content-based filtering approach can start to recommend as soon as information about items is available. At the same time, its disadvantages are that items and attributes must be machine-recognizable and that, for lack of consideration of other people's experience, the system cannot make any assessment of quality, style or viewpoint for the item: recommendations are based on the item's attributes, descriptions, tags and so on, and thus miss any personality assessment.
Serendipity is the ability of the system to give an item that is surprisingly interesting to a user, but unexpected or possibly not seen before by the user. If there are two words spelled differently but having the same meaning, CB filtering will recognize them as two independent words and will not find the similarity. Most of the remarkable advantages of collaborative filtering systems can be derived directly from the content-based filtering disadvantages. CF systems do not require content information about either users or items to be machine-recognizable. These systems can make an assessment of quality, style or viewpoint by considering other people's experience, can produce personalised recommendations, and can suggest serendipitous items by observing similar-minded people's behaviour. The collaborative filtering approach has disadvantages too: it cannot produce recommendations if no ratings are available, and it demonstrates poor accuracy when there is little data about users' ratings; this is called the cold-start problem. CF systems are not content-aware, i.e. information about items is not considered when recommendations are produced. Also, many existing CF algorithms work slowly on huge amounts of data. The hybrid approach is the solution for most of the above-mentioned disadvantages. The conclusion is that a good solution would be to start with a switched-type hybrid recommender system. This system will consist of three components: at least one collaborative filtering algorithm (matrix factorization), at least one content-based algorithm (a combination of term frequency-inverse document frequency with the k-nearest neighbours algorithm), and a simple component selecting the most confident result, say, if no user-behaviour data are available, use the content-based algorithm, and collaborative filtering otherwise. [5][6][7]

3. Experimental Study
3.1 Dataset
For the experiments we have used the 1M MovieLens dataset.
Log-likelihood coefficient: this similarity is based on the number of items in common between two users, but its value is more an expression of how unlikely it is for two users to have so much overlap, given the total number of items available and the number of items to which each user has expressed a preference.
The rating data file consists of three columns: the user ID, the item ID and the rating value. The user and item IDs are long (64-bit) integers which are non-negative in nature, and the rating value is a double (64-bit). Each user rates at least 20 movies, on a scale of one to five stars. Ratings of 1 and 2 are lower ratings, which can be considered negative; 4 and 5 represent higher or positive ratings; 3 indicates ambivalence. Users indicate their interests by giving their preference as a numeric value. The underlying likelihood function is

f(y, θ) = Π_{i=1}^{n} f_i(y_i, θ) = L(θ; y)

The next step is to identify the target users. This is performed by threshold-based selection: items whose similarity value exceeds a specific threshold are considered neighbours of the target item. The prediction step then calculates the weighted average of the neighbours' ratings, weighted by their similarity to the target item. [8]
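A minimal sketch of the threshold-based neighbour selection and weighted-average prediction described above is given below; the data structures, names and threshold value are illustrative assumptions.

```python
def predict_rating(user_ratings, similarity, target_item, threshold=0.3):
    """Item-based prediction: average the user's ratings of items whose
    similarity to the target item exceeds the threshold, weighted by
    that similarity.  `user_ratings` maps item id -> rating and
    `similarity` maps item id -> similarity to the target item."""
    num, den = 0.0, 0.0
    for item, rating in user_ratings.items():
        sim = similarity.get(item, 0.0)
        if item != target_item and sim > threshold:
            num += sim * rating
            den += abs(sim)
    return num / den if den else None

# toy usage with invented ids, ratings and similarity values
ratings = {10: 4.0, 20: 5.0, 30: 2.0}
sims = {10: 0.9, 20: 0.6, 30: 0.1}
print(predict_rating(ratings, sims, target_item=99))
```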

Evaluation Metrics
To evaluate a recommendation system we can apply three classical information retrieval metrics: precision, recall and F-measure. Precision is the proportion of recommendations that are relevant to the user, i.e. good recommendations; recall is the proportion of the relevant items that appear in the list of top recommendations. Here tp is the number of recommended items that are interesting to the user, fp is the number of recommended items that are not interesting, and fn is the number of items that are interesting but not recommended, so that Precision = tp / (tp + fp) and Recall = tp / (tp + fn). Precision and recall typically trade off against each other. The F-measure summarizes the test's accuracy as the harmonic mean of precision and recall. (A small sketch of these metrics is given after the references below.)
Top 5 recommendations for item ID 1 according to similarity value

4. Conclusions
A recommendation system is a natural fit for any analytics platform. It involves processing the large amounts of consumer data collected online, and the evaluated results serve real-time online applications. Hadoop is increasingly the primary platform used for building recommendation systems. So far our experimental results mainly focus on evaluating the classification accuracy metrics using Apache Hadoop and Mahout. Mahout can handle large amounts of structured data and apply machine learning algorithms, but nowadays the size of the data is swelling in unstructured formats, and it is no longer possible to manage with Mahout alone. By combining Apache Hadoop and Mahout for recommendation, large amounts of both structured and unstructured data can be handled efficiently at a rapid rate. Apache Hadoop is not well suited to the random access needed for real-time purposes; therefore, instead of storing the Hadoop sequence file in the Hadoop Distributed File System, we should use HBase or Sqoop for retrieving real-time data. If we combine both item-based and user-based collaborative filtering, the recommender system can present accurate recommendations to the intended users. [9]

References
1. Jianming He and Wesley W. Chu, A Social Network-Based Recommender System (SNRS) (2010).
2. Alexander Felfernig, Michael Jeran, Gerald Ninaus, Florian Reinfrank, Stefan Reiterer and Martin Stettinger, Basic Approaches in Recommendation Systems (2013).
3. Daniar Asanov, Algorithms and Methods in Recommender Systems (2015).
4. Roshni K. Sorde and Sachin N. Deshmukh, Comparative Study on Approaches of Recommendation System (2015).
5. F. O. Isinkaye, Y. O. Folajimi and B. A. Ojokoh, Recommendation Systems: Principles, Methods and Evaluation (2015).
6. Min Gao, Bin Ling, Quan Yuan, Qingyu Xiong and Linda Yang, A Robust Collaborative Filtering Approach Based on User Relationships for Recommendation Systems (2014).
7. Lalita Sharma and Anju Gera, A Survey of Recommendation System: Research Challenges (2013).
8. Dr. Senthil Kumar Thangavel, Neetha Susan Thampi and Johnpaul C I, Performance Analysis of Various Recommendation Algorithms Using Apache Hadoop and Mahout.
9. Peter Casinelli, Evaluating and Implementing Recommender Systems As Web Services Using Apache Mahout.
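The evaluation metrics referred to above can be sketched as a small helper; this is illustrative only, not the Mahout evaluation code.

```python
def precision_recall_f1(tp, fp, fn):
    """Classical IR metrics from true positives, false positives and
    false negatives, as defined in the Evaluation Metrics section."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# e.g. 8 relevant items recommended, 4 irrelevant recommended,
# 2 relevant items missed
print(precision_recall_f1(tp=8, fp=4, fn=2))
```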

An Efficient Method for Twitter Text Classification using Two Class Boosted Decision Tree

1 Somnath Rakshit, 2 Prof. Srimanta Sinha Roy
1 Dept. of Computer Science and Engineering, Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal, India, somnath@cse.jgec.ac.in
2 Dept. of Mathematics, Barrackpore Rastraguru Surendranath College, West Bengal, India, srimanta.sr@gmail.com

Abstract
Twitter is a very popular micro-blogging site where users express their opinions about various events through status messages called Tweets. In this paper, we propose a model to classify Tweets into two classes of sentiment according to the emotional state they convey, that is, happiness or sadness. It is our hypothesis that high accuracy can be obtained in this task using machine learning techniques, and we take a supervised approach. Generally, this type of automated classification is highly useful when researching the public mood regarding a particular topic.

Keywords: Twitter; Boosted Decision Tree; Classification; Sentiment Analysis

1. Introduction
Twitter has become an immensely popular micro-blogging site in recent times. With over 313 million monthly active users as of February 2017, it has become a medium used by the public to express their opinions on various issues globally [2]. A huge amount of research has been done on text classification using text from news articles, which is written in formal language, but less attention has been given to text classification using text written in informal language, such as tweets [3]. People use Twitter to post about almost anything and everything under the Sun [14]. Given the huge number of tweets, it is a vast mine of information containing the customer's perspective, waiting to be extracted. This information can be highly useful to analysts, marketers and policy makers [8]. The purpose of the paper is to develop a suitable algorithm that can classify tweets, based on their emotional state, into two categories, positive or negative. We focus our attention on investigating whether we can automate the process of classifying a tweet according to its emotional state. In this paper, we propose a model to perform this task using a Two Class Boosted Decision Tree.

2. Background
Sentiment analysis has been one of the most active fields in Natural Language Processing. Research in (Pak and Paroubek, 2010) shows that features from an existing sentiment lexicon were somewhat useful in conjunction with micro-blogging features, but the micro-blogging features (i.e. the presence of intensifiers and positive/negative/neutral emoticons and abbreviations) were clearly the most useful [8, 6]. Also, (Go, Bhayani and Huang, 2009) conclude that machine learning algorithms (Naive Bayes, maximum entropy classification and support vector machines) can achieve high accuracy in classifying sentiment when trained with emoticon data [1]. Other papers (Jansen et al. 2009; O'Connor et al. 2010; Tumasjan et al. 2010; Bifet and Frank 2010; Barbosa and Feng 2010; Davidov, Tsur, and Rappoport 2010) have also studied sentiment analysis on Twitter data [4].
3. Proposed Methodology
3.1 Preprocessing
We use the labelled .csv data from CrowdFlower (Sentiment Analysis: Emotion in Text) for our experiment. It consists of 10,362 tweets, each labelled according to its emotional state (happiness or sadness). First, we check whether the data set contains any missing labels [5, 6]; all 10,362 rows are labelled and there is no unlabelled row. Next, we replace all punctuation marks and other special characters in the tweets by the space character. We can then use the Feature Hashing module to transform the stream of English text into a set of features represented as integers, and pass this hashed feature set to a machine learning algorithm to train a text analysis model.

3.2 Feature Hashing
The data obtained after cleaning is passed to the Feature Hashing module, which works on the Vowpal Wabbit framework. We set N-grams equal to 1 and the hashing bit size equal to 12, which means that 2^12 = 4096 features are used [9].

3.3 Training the data set
We use the Two Class Boosted Decision Tree to train our model with the following parameters.
Table 1: Parameter list for the Two Class Boosted Decision Tree: maximum number of leaves per tree 20; minimum number of samples per leaf node 10; learning rate 0.2; number of trees constructed 100.

3.4 Proposed algorithm
We have used the Two Class Boosted Decision Tree to predict the emotional attribute of each of the tweets in our data set [11, 12]. We provide the preprocessed data set as the input, train our model on it, and the model then returns the predicted labels as output.

3.5 Method
The boosted decision tree algorithm in Azure Machine Learning uses the following boosting method [7, 13]:
1. At first, we have an empty ensemble of weak learners.
2. Then we get the current output of the ensemble for each of the training examples.
3. The gradient of the loss function is calculated for each example.
4. A weak learner is fitted using the gradient as the target function.
5. This weak learner is added to the ensemble, with its learning rate considered as its strength. If needed, we go back to Step 2.
The weak learners in this implementation are least-squares regression trees, which use the gradients calculated in Step 3 as the target. The trees are subject to the following restrictions: they are trained up to a maximum number of leaves; each leaf has a minimum number of examples to guard against overfitting; each decision node is a single feature compared against some threshold (if the feature is less than or equal to the threshold, the example goes down one path, and if it is greater, it goes down the other); and each leaf node is a constant value.
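The same pipeline can be sketched outside Azure ML, for example with scikit-learn. This is an illustrative approximation rather than the exact Studio modules used in the experiment; the file name and column names are assumptions taken from Table 2.

```python
# Sketch: hash unigrams into 2^12 features and train a boosted-tree
# classifier with the parameters listed in Table 1.
import pandas as pd
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("text_emotion.csv")          # hypothetical file name
texts, labels = df["features"], df["label"]   # column names as in Table 2

vectorizer = HashingVectorizer(n_features=2**12, ngram_range=(1, 1),
                               alternate_sign=False)
X = vectorizer.fit_transform(texts).toarray()

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.2,
                                 max_leaf_nodes=20, min_samples_leaf=10)
scores = cross_val_score(clf, X, labels, cv=5)   # cross-validation, as in 3.5
print("accuracy per fold:", scores)
```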

The tree-building algorithm greedily selects the feature and threshold for which a split will most decrease the squared loss with regard to the gradient calculated in Step 3. It is subject to a minimum number of training examples per leaf, and it repeatedly splits until it reaches the maximum number of leaves or until no valid split is available. Features are discretized and binned prior to training, so only a relatively small set of threshold candidates is considered, even for continuous features.
The Cross Validation Model is then applied. It takes the untrained data set as input, divides it into a number of subsets (folds), builds a model on each, and finally returns a set of accuracy statistics for each fold. By comparing the accuracy statistics of the folds we can interpret the quality of the data set and understand whether the model is susceptible to variations in the data [10].

4. Experimental Result and Analysis
4.1 Data
Table 2: Features present in the data set. id_nfpu: a unique identifier for each row of data; label: the emotional state of the tweet; features: the tweet text.

Experimental Setup
We have used Microsoft Azure Machine Learning Studio to solve our problem. We chose the Two Class Boosted Decision Tree and configured the model as follows: maximum number of leaves per tree = 20, minimum number of samples per leaf node = 10, learning rate = 0.2, number of trees constructed = 100. Finally, we train the model using this configuration, and then evaluate and report the results of the experiment.

4.2 Experimental Result
Consider a binary evaluation measure B(tp, tn, fp, fn) that is calculated from the true positives (tp), true negatives (tn), false positives (fp) and false negatives (fn). The macro and micro averages of a specific measure can be calculated as follows:

B_macro = (1/q) Σ_{l=1}^{q} B(tp_l, tn_l, fp_l, fn_l)
B_micro = B(Σ_{l=1}^{q} tp_l, Σ_{l=1}^{q} tn_l, Σ_{l=1}^{q} fp_l, Σ_{l=1}^{q} fn_l)

where λ_j is a label in L = {λ_j : j = 1...q}, the set of all labels. Here, Accuracy = (TP + TN) / (TP + FP + TN + FN), Precision = TP / (TP + FP) and Recall = TP / (TP + FN). The following criteria have been used to measure and evaluate our model.
Table 3: Performance analysis of the proposed method (overall accuracy %, average accuracy %, micro-averaged precision %, macro-averaged precision %, micro-averaged recall %, macro-averaged recall %)

5. Conclusion
Our experiment shows that by using a Two Class Boosted Decision Tree algorithm we can obtain a high accuracy of 84.8% in classifying Tweets according to their emotional state. Thus we can use this model to classify tweets with relatively high accuracy into two groups, positive and negative, based on their emotional attributes, and it can be used by data analysts and marketers to learn previously unknown information about the customer's perspective.

References
1. Go, Alec, Lei Huang, and Richa Bhayani. Twitter sentiment analysis. Entropy 17 (2009).
2. Company | About, Twitter.
3. Kouloumpis, Efthymios, Theresa Wilson, and Johanna D. Moore. Twitter sentiment analysis: The good the bad and the omg!. ICWSM (2011).
4. O'Connor, Brendan, et al. From tweets to polls: Linking text sentiment to public opinion time series. ICWSM (2010).
5. Sarlan, Aliza, Chayanit Nadam, and Shuib Basri. Twitter sentiment analysis. Information Technology and Multimedia (ICIMU), 2014 International Conference on. IEEE.
6. Haddi, Emma, Xiaohui Liu, and Yong Shi. The role of text pre-processing in sentiment analysis. Procedia Computer Science 17 (2013).
7. Dietterich, Thomas G. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning 40.2 (2000).
8. Pak, Alexander, and Patrick Paroubek. Twitter as a Corpus for Sentiment Analysis and Opinion Mining. LREC. Vol. 10.
9. Feature Hashing, MSDN - Microsoft.
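The macro/micro averaging defined in Section 4.2 can be sketched as follows; this is an illustrative helper only.

```python
def macro_micro(confusions, metric):
    """Macro- and micro-average a binary metric over per-label confusion
    counts.  `confusions` is a list of (tp, tn, fp, fn) tuples, one per
    label; `metric` is a function of (tp, tn, fp, fn)."""
    q = len(confusions)
    macro = sum(metric(*c) for c in confusions) / q
    totals = tuple(sum(c[i] for c in confusions) for i in range(4))
    micro = metric(*totals)
    return macro, micro

def precision(tp, tn, fp, fn):
    return tp / (tp + fp) if tp + fp else 0.0

# toy example with two labels and invented confusion counts
print(macro_micro([(50, 30, 10, 10), (20, 60, 5, 15)], precision))
```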

Selection of Electric Car by using Fuzzy AHP-TOPSIS Method

Projit Mukherjee 1*, Tapas K. Biswas 2 and Manik C. Das 3
1,2,3 Department of Automobile Engineering, MCKV Institute of Engineering, Liluah, Howrah, West Bengal, India
1 projit.mukherjee.96@gmail.com, 2 tapasbiswasmckv@gmail.com, 3 cd_manik@rediffmail.com

Abstract
The rapid depletion of fossil fuels and the need for a pollution-free environment have led the automobile industry to switch over from conventional fuel-fired vehicles to electric vehicles. This paper deals with the selection of an appropriate electric vehicle using an integrated Fuzzy AHP-TOPSIS method. Electric vehicles are gradually becoming popular in the passenger car segment due to several other factors, including efficient battery and energy management technologies, increased oil prices, reduced greenhouse gas emissions, government subsidies and other incentives. Manufacturers have come up with different types of electric vehicles with a variety of features to meet customer demand. Selecting the best electric vehicle from the customer's perspective, considering the desired criteria, has become a research issue in a multi-criteria environment. This paper considers various technical and operational attributes, such as fuel economy, base model price, top speed on a flat road, acceleration time and battery range, for the selection of the vehicle. The Fuzzy AHP method is used to obtain the relative weights of the criteria, on the basis of which the alternatives are evaluated using TOPSIS.

Keywords: Electric car selection, MCDM, Fuzzy AHP, TOPSIS.

1. Introduction
Electric cars are significantly quieter than conventional internal combustion engine vehicles. They do not emit tailpipe emissions, giving a large reduction in local air pollution, and can give a significant reduction in total greenhouse gas and other emissions. They also typically generate less noise pollution than internal combustion engine vehicles, whether at rest or in motion. Electric cars also provide independence from foreign fuel, which in several countries is a cause for concern regarding fossil fuels. Energy is not consumed while the vehicle is stationary, unlike internal combustion engines, which consume fuel while idling. For long-distance travelling, many electric cars support fast charging that can give around an 80% charge in half an hour using public rapid chargers. The electric car does have some limitations: battery cost is still relatively high, and because of this most electric cars have a more limited range and a somewhat higher purchase cost than conventional vehicles. Drivers can also suffer from range anxiety, the fear that the batteries will be exhausted before reaching their destination. Nevertheless, the tendency to purchase electric vehicles has increased rapidly in the last decade, and purchasing an electric car in the market has become a tough task for customers. The selection of an automobile is important for the customer because many brands are available with a variety of models and features; as there are various makes and models, it is difficult to choose the right one, and it becomes a critical decision-making problem. There are many multiple-criteria decision-making (MCDM) methods available for the selection and ranking of alternatives, and TOPSIS is one of these.
This technique provides a basis for decision-making processes where there is a limited number of choices but each has a large number of attributes. In this paper we consider cost-related and performance criteria: car model cost, EPA-rated combined fuel economy, top speed on a flat road, acceleration time and battery range. A good vehicle means good mileage and less fuel consumption. The range of an electric car depends on the number and type of batteries used; here the maximum battery range means the distance travelled by the car on one full charge of the battery. As electric car speeds are generally low, the maximum top speed is included among the criteria. The acceleration criterion means the time taken to accelerate from 0 to 60 mph. Vehicle cost is another important criterion. The BMW i3, Chevy Bolt, Chevy Spark, Fiat 500e, Ford Focus Electric, Mitsubishi i-MiEV and Tesla Model S have been chosen as alternatives for selection and ranking under the passenger car category.

2. Methodology
Multiple-criteria decision making is not an esoteric subject; irrespective of the field, it can be employed to select and prioritize the alternatives in a set. Many multiple-criteria analysis tools are available for performance evaluation and ranking of alternatives. In this paper we use fuzzy AHP to determine the weights of the evaluation criteria and the TOPSIS method for performance evaluation.

2.1 Fuzzy AHP method
The analytic hierarchy process is a multi-criteria decision-making (MCDM) tool used to render subjective judgments of one criterion over another. This tool, first introduced by T. L. Saaty [1] (Saaty, 1980), works on an eigenvalue approach to pairwise comparison. Though AHP is very capable of dealing with an expert's knowledge and experience through perception or preference, it still cannot fully reflect human thought with crisp numbers. A fuzzy set, an extension of a crisp set, deals with ambiguous or imprecise data; it was first introduced by Zadeh in 1965 [2]. A fuzzy set is characterized by a membership function which assigns to each object a grade of membership ranging between zero and one. Triangular and trapezoidal fuzzy numbers are normally used to capture the vagueness of the parameters related to selecting the alternatives. The triangular fuzzy number (TFN) is very simple to use and calculate, and it helps in decision-making problems where the available information is subjective and imprecise. In practical applications, the triangular form of the membership function is used most often for representing fuzzy numbers [3], [4]; it can be defined by a triplet M = (l, m, u), where m is the median value of the fuzzy number M and l and u are the left and right sides of M respectively. The fuzzy scale [5] for pairwise comparisons of one criterion over another is shown in Table 1; this scale is used to develop the pairwise comparison matrix.
Table 1: Fuzzy scale
In this paper the extent fuzzy AHP is utilized, which was originally introduced by Chang [5]. Let A = (a_ij)_{m x n} be a fuzzy pairwise comparison judgement matrix, and let M_ij = (l_ij, m_ij, u_ij) be a triangular fuzzy number (TFN). The steps of fuzzy AHP are as follows:
Step 1: Form the pairwise comparisons of the attributes using fuzzy numbers, each composed of a low, median and upper value, at the same level of the hierarchy structure.
Step 2: The value of the fuzzy synthetic extent with respect to the i-th object is defined as
S_i = Σ_{j=1}^{m} M_ij ⊗ [ Σ_{i=1}^{n} Σ_{j=1}^{m} M_ij ]^{-1}

By the above formula the TFN value of S_i is calculated.
Step 3: We compare the values of S_i and calculate the degree of possibility of S_j = (l_j, m_j, u_j) ≥ S_i = (l_i, m_i, u_i). This can be expressed as follows:
V(S_j ≥ S_i) = height(S_i ∩ S_j) = μ_{S_j}(d) = 1, if m_j ≥ m_i
where d is the ordinate of the highest intersection point D between μ_{S_i} and μ_{S_j} (see Figure 1). To compare S_i and S_j, both the values V(S_j ≥ S_i) and V(S_i ≥ S_j) are required.
Step 4: We calculate the minimum degree of possibility d(i) of V(S_j ≥ S_i) for i, j = 1, 2, ..., k:
V(S ≥ S_1, S_2, ..., S_k) = V[(S ≥ S_1) and (S ≥ S_2) and ... and (S ≥ S_k)] = min V(S ≥ S_i), for i = 1, 2, ..., k  (5)
Assume that d'(A_i) = min V(S ≥ S_i) for i = 1, 2, ..., k. Then the weight vector is defined as
W' = (d'(A_1), d'(A_2), ..., d'(A_n))^T  (6)
where A_i (i = 1, 2, ..., n) are the n elements.
Step 5: We normalize the weight vector, obtaining
W = (d(A_1), d(A_2), ..., d(A_n))^T  (7)
where W is a non-fuzzy number.

2.2 TOPSIS method
TOPSIS (technique for order performance by similarity to ideal solution) was first introduced by Hwang & Yoon (1981). This technique suggests that the best alternative is the one nearest to the positive-ideal solution and farthest from the negative-ideal solution. The positive-ideal solution maximizes the benefit criteria and minimizes the cost criteria, whereas the negative-ideal solution maximizes the cost criteria and minimizes the benefit criteria. In brief, the positive-ideal solution is composed of all the best values attainable on the criteria, whereas the negative-ideal solution consists of all the worst values attainable on the criteria. The general steps of the TOPSIS method are as follows.
Step 1: A decision matrix is formed and expressed as D = [θ_ij].
Step 2: Calculate the normalized decision matrix R = [r_ij]. The normalized value r_ij is calculated as r_ij = θ_ij / √(Σ_{i=1}^{m} θ_ij²)  (8)
Step 3: Calculate the weighted normalized decision matrix by multiplying the normalized decision matrix by the associated weights. The weighted normalized value V_ij is calculated as V_ij = w_j × r_ij, i = 1, 2, ..., m and j = 1, 2, ..., n  (9), where w_j represents the weight of the j-th attribute or criterion.
Step 4: Determine the positive-ideal and negative-ideal solutions.
Step 5: Calculate the Euclidean distance of each alternative from the positive-ideal and negative-ideal solutions.
Step 6: Calculate the relative closeness to the positive-ideal solution, defined as the ratio of the distance from the negative-ideal solution to the sum of the distances from both ideal solutions. The higher the closeness, the better the rank.

3. Data and Computation
The proposed fuzzy AHP-TOPSIS model for the selection of a vehicle consists of two basic stages: (i) determination of the priority weights of the evaluation criteria and (ii) ranking of the alternatives using TOPSIS. After identification of the evaluation criteria with the help of an expert committee, fuzzy linguistic values are used to determine the weights of the criteria.
3.1 Priority of criteria
Considering the feedback of experts from various fields, we form the pairwise comparison matrix of the 5 criteria to get their relative weights over one another. Using Eqs. (1)-(7) we determine the final weights of the 5 criteria of combined fuel economy, battery range, top speed, acceleration time and price as 0.316, 0.201, 0.278, and respectively.
3.2 Application of FAHP-TOPSIS method for the selection of electric car
For the purpose of vehicle selection, the quantitative data used are shown in Table 2. In the table the passenger car companies are considered as the alternatives and are placed in the rows; the criteria or attributes are placed in the columns.
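Before turning to the actual data, the TOPSIS steps of Section 2.2 can be sketched compactly with numpy. The decision matrix below is made-up demonstration data (it is not Table 2), and the last two weights are placeholders, since only the first three criteria weights are reproduced above.

```python
import numpy as np

# rows = cars, columns = (fuel economy, battery range, top speed,
# acceleration time, price); all figures invented for illustration
D = np.array([[110.0, 120, 150, 7.0, 30000],
              [119.0, 328, 145, 6.5, 37000],
              [ 95.0, 300, 155, 2.5, 70000]])
weights = np.array([0.316, 0.201, 0.278, 0.10, 0.105])   # last two assumed
benefit = np.array([True, True, True, False, False])     # time and price are costs

R = D / np.sqrt((D ** 2).sum(axis=0))          # Step 2: vector normalisation (Eq. 8)
V = R * weights                                 # Step 3: weighted matrix (Eq. 9)
pis = np.where(benefit, V.max(axis=0), V.min(axis=0))   # Step 4: positive ideal
nis = np.where(benefit, V.min(axis=0), V.max(axis=0))   # and negative ideal
d_pos = np.sqrt(((V - pis) ** 2).sum(axis=1))   # Step 5: Euclidean distances
d_neg = np.sqrt(((V - nis) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)             # Step 6: relative closeness
print(np.argsort(-closeness))                   # ranking, best alternative first
```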

Table 2: Quantitative data for selection of alternatives
Using Eq. 8, we normalize the performance scores of the alternatives with respect to the considered attributes, followed by computation of the weighted normalized decision matrix using Eq. 9. The positive and negative ideal solutions (PIS and NIS), which are the best and worst performance scores of the weighted normalized decision matrix, are found. Using the PIS and NIS, the Euclidean distances of each alternative from the positive-ideal and negative-ideal solutions are measured using Eq. 12. The relative closeness to the positive-ideal solution for each alternative is calculated using Eq. 13. On the basis of the closeness value the ranking, and hence the selection, is made. The result of the computation is shown in Table 3.
Table 3: Results of FAHP-TOPSIS method

4. Conclusions
The procedure for passenger car selection deals with finding the best car among the alternatives available in the market using decision-making methods. After checking the various process parameters under different circumstances, it is observed that the proposed model is rather simple to use and meaningful for aggregation of the process parameters. The Fuzzy AHP-TOPSIS method is applied to achieve the final ranking preferences, thus allowing relative performances to be compared. In this paper the Tesla Model S gets the first rank among all of them: the Tesla Model S is the fastest-accelerating car in production, its battery range of 300 miles is higher than that of any other electric car, and the EPA rated its energy consumption at a combined fuel economy of 95 miles per gallon gasoline equivalent. The Chevy Bolt's fuel economy is rated at 119 miles per gallon gasoline equivalent, it achieves a battery range of about 328 miles, and its cost is not so high; therefore the Bolt gets the second rank in this paper. However, the outcome of the proposed method is very sensitive to the selection of the criteria. As the proposed Fuzzy AHP-TOPSIS method is simple to understand and involves little complexity, it can be considered a useful and reliable tool for normal decision making.

References
1. T. L. Saaty, The Analytic Hierarchy Process (1st ed.). New York: McGraw-Hill, 1980.
2. L. A. Zadeh, Fuzzy sets. Information and Control, Vol. 8, 1965.
3. C. Kahraman, A. Beskese and D. Ruan, Measuring flexibility of computer integrated manufacturing systems using fuzzy cash flow analysis, Information Sciences, Vol. 168, pp. 77-94.
4. Z. S. Xu and J. Chen, An interactive method for fuzzy multiple attribute group decision making, Information Sciences, Vol. 177.
5. D. Y. Chang, Applications of the extent analysis method on fuzzy AHP, European Journal of Operational Research, Vol. 95.

82 128 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 129 Turning of Inconel 625 with Coated Carbide Insert Kiran Tewari, Akshee Shivam, Santanu Chakraborty, Amit Kumar Roy, B. B. Pradhan Mechanical Department Sikkim Manipal Institute of Technology Majitar, East Sikkim 2.2 Calculation of surface roughness The surface roughness measured using surface roughness tester SJ-210. Table 2: Experimental value of surface roughness Abstract Inconel 625 is a nickel based super alloy which has wide industrial applications due to its prominent strength and hardness and excellent thermo-mechanical properties. The paper presents CNC lathe turning of Inconel 625 using coated carbide insert using spindle speed, feed rate and depth of cut as input parameters. The output parameters MRR (material removal rate) and surface roughness Raare optimized on Minitab software using Taguchi analysis function (L9) and grey relation analysis for multi- objective optimization. The response value is further analyzed using general linear model and ANOVA. Keywords : Depth of cut, Feed rate, GRA, Inconel 625, MRR, Surface roughness,taguchi analysis. 2.3 Calculation of weight reduction of the coated carbide tool Table 3: Calculation of weight reduction 1. Introduction Inconel 625 is a nickel based super alloy comprising of refractory metals, columbium and molybdenum in a nickel based chromium matrix [1]. Alloy 625 has found extensive use inmany industries for diverse applications over a wide temperature range from cryogenic conditions to ultra-hot environments over 1000 C [2].Coated carbide inserts are basically cemented carbides with single or multi-layer coating of TiAlN, TiN, TiCetc [3]. Comparative study of machining with a coated carbide tool and uncoated carbide tool was done to conclude that coated carbide tool shows less adhesion and higher tool wear rate [4]. Study has been conducted on optimization of input parameters of CNC turning operation using Taguchi approach describing the effectiveness of the method [5].Comparisons was conducted on different EN grade materials using two different inserts of coated carbide cutting tools [6]. A study compares the conventional techniques of turning and the ultrasonic assisted turning for machining of Ti and In 625 alloys [7]. Experiments to improve the surface finish quality of IN 625 by using carbide tips was conducted [8]. Coated carbide insert CORO Turn 107 DCMX 11 T3 04 WF 1115 is used in the experiment. In this experiment, CNC turning is done with process parameters feed; spindle speed and depth of cut and responses considered are material removal rate (MRR) and surface roughness. 2. Experimental set up Initial dia. (before turning) is 22.3 mm to get the final dia. (after turning) 20 mmalong-with 60 mm length of work-piece and 15 mm length of cut while turning by using cutting tool on MTAB SDSOR F3. The weight of insert is measured on a digital precision weight measuring instrument (least count g). 3 Analysis of variance (ANOVA) 3.1 Analysis of variance for MRR Table 4: ANOVA table for MRR S= R-Sq=99.68% R-Sq (adj)= 98.73% It is clear that MRR is influenced in the increasing order of cutting speed feed rate and depth of cut. 3.2 Analysis of variance for Surface roughness Table 5: ANOVA table for surface roughnes 2.1 Calculation of material removal rate (MRR) Table1: Experimental value of MRR S= R-Sq=67.86% R-Sq (adj) = 0.00%. 
The table shows that the influence of cutting speed on surface roughness is the highest, followed by depth of cut and then feed rate.
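Since the two responses are optimized through Taguchi's L9 design, the signal-to-noise (S/N) ratios behind the response tables can be computed as below. This is a generic sketch of the standard S/N definitions (larger-the-better for MRR, smaller-the-better for Ra); the response values shown are assumed placeholders, not the experimental data of Tables 1 and 2.

```python
import numpy as np

def sn_larger_the_better(y):
    """S/N ratio for responses where larger is better (e.g. MRR)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    """S/N ratio for responses where smaller is better (e.g. surface roughness Ra)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Placeholder responses for one run of the L9 array (repetitions of a trial)
mrr_run = [410.0, 425.0]      # mm^3/min, assumed values
ra_run = [1.62, 1.58]         # micron, assumed values
print(sn_larger_the_better(mrr_run), sn_smaller_the_better(ra_run))
```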

4. Regression Analyses

A linear regression model was fitted to the response variables MRR and surface roughness with rpm, feed rate and depth of cut as predictor variables.

Table 6: Normalized values of MRR and surface roughness

The regression equation for MRR is a linear function of rpm, feed rate and depth of cut, with R-sq = 76.17%, which indicates 76.17% variability explained in predicting new observations. The regression equation for surface roughness (S.R) is likewise a linear function of rpm, feed rate and depth of cut, with R-sq = 67.03%, which indicates 67% variability explained in predicting new observations.

Fig 1: Normal probability plot for MRR
Fig 2: Normal probability plot for S.R

5. Grey Relational Analysis

The normalized data processing for Ra, corresponding to the smaller-the-better criterion, is expressed as:

x_i(k) = (max y_i(k) - y_i(k)) / (max y_i(k) - min y_i(k))   [1]

The above equation is used to calculate the normalized value of surface roughness. The normalized data processing for MRR, corresponding to the larger-the-better criterion, is expressed as:

x_i(k) = (y_i(k) - min y_i(k)) / (max y_i(k) - min y_i(k))   [2]

where i = 1, 2, 3, ..., m; m is the number of experimental runs in the Taguchi orthogonal array, here m = 9 (as the L9 orthogonal array is selected); n is the number of process responses, here n = 2 (MRR and surface roughness); min y_i(k) is the smallest value of y_i(k) for the k-th response; max y_i(k) is the largest value of y_i(k) for the k-th response; x_i(k) is the value after grey relational generation; and x_0(k) is the reference sequence of the k-th quality characteristic.

5.1 Grey relational coefficient and grey relational grade

Table 7: Deviation sequence and grey relational coefficient

Δmin and Δmax are respectively the minimum and maximum values of the absolute differences Δ_0i of all the comparing sequences, and ξ is the distinguishing coefficient.

Table 8: Grey relational grade for the L9 orthogonal array

where i = 1, 2, 3, ..., 9 (L9 orthogonal array is selected), ξ_i(k) is the grey relational coefficient of the k-th response in the i-th experiment and n is the number of responses. The experiment with the highest value of the grey relational grade gives the best combination of turning parameters for surface roughness and MRR among all the experiments.
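A compact sketch of the grey relational computation given by Eqs. [1] and [2] follows. The grey relational coefficient and grade use the usual definitions with distinguishing coefficient ξ = 0.5 and equal response weights; the MRR and Ra arrays are placeholders for illustration, not the measured values of Tables 1 and 2.

```python
import numpy as np

def normalize(y, larger_is_better):
    """Grey relational generation, Eqs. [1] and [2]."""
    y = np.asarray(y, dtype=float)
    if larger_is_better:                       # e.g. MRR
        return (y - y.min()) / (y.max() - y.min())
    return (y.max() - y) / (y.max() - y.min()) # e.g. surface roughness Ra

def grey_relational_grade(responses, larger_flags, xi=0.5):
    """responses: list of n response arrays over the m = 9 L9 runs."""
    X = np.column_stack([normalize(r, f) for r, f in zip(responses, larger_flags)])
    delta = np.abs(1.0 - X)                    # deviation from reference sequence x0 = 1
    coeff = (delta.min() + xi * delta.max()) / (delta + xi * delta.max())
    return coeff.mean(axis=1)                  # grade = mean coefficient over the responses

# Placeholder data for the 9 runs (assumed values, for illustration only)
mrr = [320, 410, 505, 450, 540, 365, 610, 395, 480]
ra  = [1.9, 1.6, 1.4, 1.7, 1.2, 2.1, 1.1, 1.8, 1.5]
grade = grey_relational_grade([mrr, ra], [True, False])
print("best run:", int(grade.argmax()) + 1)    # run with the highest grade
```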

Analysis of variance for grey relational grade

Table 9: ANOVA for grey relational grade

The above table shows the percentage influence in the decreasing order of cutting speed, feed rate and then depth of cut.

6. Conclusions

The experiment concludes that the influence of depth of cut on MRR (larger-the-better criterion) is the highest, and values of 0.25 mm depth of cut, 90 mm feed and 800 rpm cutting speed are obtained as the optimized input values. The response table and the main effect plot for surface roughness, calculated for the smaller-the-better criterion, conclude that the influence of cutting speed on surface roughness is the highest, and values of 900 rpm spindle speed, 0.25 mm depth of cut and 70 mm feed rate are fixed as the optimized input values.

References
1. Special Metals Corporation Products, INCONEL alloy 625, Com/products.
2. T. Charles, International Journal of Pressure Vessels and Piping, 59 (1994).
3. Bi G., 2013, Micro-structure and mechanical properties of nano-TiC reinforced Inconel 625 deposited using LAAM, Physics Procedia, 41.
4. Park K.H. et al., 2015, Tool wear analysis on coated and uncoated carbide tools in Inconel machining, International Journal of Precision Engineering and Manufacturing, Vol. 16(7).
5. Sinha P.K., 2013, Optimization of parameters on CNC turning operation for the given component using Taguchi approach, International Journal of Mechanical Engineering and Technology, Vol. 4(4).
6. Dave H.K. et al., Effect of machining conditions on MRR and surface roughness during CNC turning of different materials using TiN coated cutting tools - A Taguchi approach, International Journal of Industrial Engineering Computations.
7. Roy A. et al., 2012, Comparing machinability of Ti and Ni 625 alloys in UAT, Procedia CIRP, Elsevier.
8. Raviteja P. et al., 2015, Optimization of process parameters for milling of nickel alloy Inconel 625 by using Taguchi method, International Journal of Mechanical Engineering (SSRG-IJME), Vol. 2(2).

Power Generating Suspension System
Shahbaz Chowdhury, Rachit Chaudhary, Utsav Kumar, Akshee Shivam, Santanu Chakraborty, A. P. Tiwary, B. B. Pradhan
Mechanical Engineering Department, Sikkim Manipal Institute of Technology, Majitar, East Sikkim

Abstract
In the past decade, regenerative braking systems have become an increasingly popular form of energy recovery, and much of the current research is concentrated on regenerative suspension systems. This technology has the ability to continuously recover the vibration energy that a vehicle dissipates due to road irregularities, acceleration and braking, and to use that energy to reduce fuel consumption. This paper concentrates on such technology in the form of a regenerative shock absorber that converts parasitic intermittent linear motion and vibration into useful energy, such as electricity; conventional shock absorbers simply dissipate this energy as heat. Research works have indicated that this system can improve the stability of the vehicle. It can be concluded from the present study that the use of regenerative shock absorbers can save between 1.5 and 6% of fuel, depending on the vehicle and on the driving conditions. Further research can be carried out to economize the use of such shock absorbers in countries with heavy fuel consumption and deficient oil reservoirs.
There is also scope for further improvement of the overall design of the regenerative system.

Keywords: Copper coil, Magnet bars, Recovering energy, Regenerative suspension systems, Regenerative shock absorber.

1. Introduction

The current research interest in energy saving leads to the idea of developing a shock absorber which can produce electricity apart from absorbing shocks. A regenerative shock absorber converts parasitic intermittent linear motion and vibrations into useful energy, whereas conventional shock absorbers simply dissipate this energy as heat [1]. When used in an electric or hybrid electric vehicle, the electricity generated by the shock absorber can be diverted to its powertrain to increase battery life. Finite element analysis on different configurations of linear induction generators suggests that the actual efficiency would be significantly less than predicted as a result of ineffective use of highly magnetically permeable materials for certain components [2]. Abhijeet Gupta designed electromagnetic shock absorbers that provide a means of recovering the energy dissipated in shock absorbers. Lei Zuo designed a retrofit regenerative shock absorber which can efficiently recover the vibration energy in a compact space.

2. Experimental setup

The model chosen is a simple spring-based model in which the energy present in the vertical motion of a car can be observed in the compression and extension of its springs. The total energy of all four wheels is calculated below.

Fig 1: Power Generating Suspension

85 134 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 135 The energy in the compressed spring is given by the equation Fig 2: Voltage(y) vs. time(x) graph at x o =0.06m and v o =3.5m/sec Fig 3: Displacement vs. Time (Blue) and Velocity vs. Time (Green) Graph at x o =0.03m This energy can be transformed to equivalent electrical energy. The electromagnetic damping force due to E.M.F in the coils is calculated as follows: The final expression for electromagnetic damping coefficient will be given as: Fig 4: Displacement vs. Time (Blue) and Velocity vs. Time (Green) at x o =0.03m and v o =5.7m/sec 3. Results and discussion Fig 5: Displacement vs. Time (Blue) and Velocity vs. Time (Green) at x o =0.041m and v o =5.7m/sec A fair amount of electricity is generated by the regenerative shock absorbers by converting the amount of wasted energy into electrical energy which can be stored in batteries. The project done was a small prototype with a small magnet and no. of turns of coil approx. 80 turns generating around 8mV. As P EM =c EM x 2 and 4. Conclusions Conversion of energy produced by a vehicle shock absorbers movements into electrical energy, allows a significant fuel saving between 1.5 and 6%, depending on the vehicle and on the driving conditions and improves the stability of the vehicle. Future works may also be done to improve the overall design of the regenerative system. References 1. Goldner R B, Zerigian P and Hull J R, A preliminary study of energy recovery in vehicles by using regenerative magnetic shock absorbers, , ISSN , SAE International. 2. Oly D. Paz,, Design and performance of electromagnetic shock absorber, Gupta A, Jendrzejczyk J A, Mulcahy T M and Hull J R Design of electromagnetic shock absorbers, International Journal of Mechanical Material, Lei Zuo, Brian Scully, Jurgen Shestani and Yu Zhou Design and characterization of an electromagnetic energy harvester for vehicle suspensions. 5. Zhang Jin-qiu et al., A Review on Energy-Regenerative Suspension Systems for Vehicles, Proceeding Congress on Engineering, 2013, Vol. III, July B.L.J. Gysen et al., Efficient Regenerative Direct-Drive Electromagnetic Active Suspension, IEEE Trans. on Vehicular Technology, Vol. 60, No.4, May 2011.
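The spring-energy and damping expressions referenced in the paper did not survive extraction, so as a rough illustration the sketch below uses the standard relations E = (1/2) k x^2 for the energy stored in one compressed spring and P_EM = c_EM v^2 for the instantaneous power absorbed by the electromagnetic damper, which is the form the text appears to use. All parameter values (spring stiffness, damping coefficient, velocity) are assumptions for illustration, not the prototype's measured data; only the 0.03 m compression echoes the x0 quoted in Fig 3.

```python
# Back-of-envelope estimate of recoverable suspension energy (illustrative only).
# Assumed parameters -- not taken from the prototype described in the paper.
k = 25_000.0       # spring stiffness per wheel [N/m], assumed
x = 0.03           # spring compression [m], x0 = 0.03 m as quoted in Fig 3
c_em = 120.0       # electromagnetic damping coefficient [N*s/m], assumed
v = 0.5            # suspension velocity [m/s], assumed

spring_energy_per_wheel = 0.5 * k * x**2          # E = 1/2 k x^2  [J]
total_spring_energy = 4 * spring_energy_per_wheel # all four wheels
p_em = c_em * v**2                                # P_EM = c_EM * v^2  [W]

print(f"energy per wheel : {spring_energy_per_wheel:.2f} J")
print(f"all four wheels  : {total_spring_energy:.2f} J")
print(f"instantaneous electromagnetic power: {p_em:.1f} W")
```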

86 136 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 137 Understanding the Physico-chemical Properties of Gadolinium Encased Fullerene Molecule Kunal Biswas 1, Jaya Bandyopadhyay 1, Debashis De 2* 1 Department of Biotechnology, Maulana Abu Kalam Azad University of Technology, West Bengal, Kolkata , India 2* Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology West Bengal, Kolkata , India *Corresponding author : 2 dr.debashis.de@gmail.com Abstract Nanomaterials are a class of molecules which possesses higher attention due to its unique physico-chemical properties. In this study, encapsulation of a pristine gadolinium atom (Gd) was studied within the fullerene nanocage (C 60 ). The change in the electronic properties of Gadolinium encapsulated fullerene nanocage (Gd@C 60 ), compared with the pristine C 60 is observed. The calculations were executed in semi empirical mode using ATK VNL Quantumwise software. Gd@C 60 reveals a reduction in bandgap (E g ) value ~ ev compared to pristine fullerene (C 60 ) molecule of ev. Reduction in Eg for the encapsulated system explains higher conductivity of the system. Transmission spectrum analysis too exhibits a characteristic augmented electronic behavior for Gd@C 60 compared to pristine C 60 molecule. Besides the other electronic study, I-V curve analyses of Gd@C 60 also shows a characteristic ohmic plot around 8000 na applied at ~ 0.8 bias voltage. Pristine C 60 reveals a significant Negative Differential Resistance behavior at the same bias voltage of around 0.8 V signifying lack of metallicity in its structure. Understanding the underlying phenomenon of metalo-carbon based nano scaled materials through simulation provides a platform for devising metal-carbon based nano scaled device, finding applications from opto -electronics to biomedical research. Keywords : Fullerenes (C 60 ) ; gadolinium; bandstructure; transmission spectrum; ATK-VNL. Study of Mechanical and Tribological Properties of Al-Si Alloy with Varying Percentage of Aluminium and Tin Abstract Somnath Das, Kandrap Kumar Singh, Amlan Bhuyan, Akshee Shivam, Santanu Chakraborty, Dr. B B Pradhan Mechanical Engineering Department Sikkim Manipal Institute of Technology Majitar, East Sikkim. Aluminum alloys have been extensively used as materials in transportation, engine components and structural applications. Thus, improving its tribological and mechanical characteristics has been a major research topic over the years. In the present work pin of Al-Si-Cu-Mg-Sn-Zn alloy of different percentage of Tin and Zinc is prepared using powder metallurgy to determine various mechanical and tribological properties such as the variation of density of the alloy samples, the porosity of the samples, the hardness numbers of different p/m samples and the wear behavior of the alloy samples. Composition analysis, density test, hardness test, porosity and wear test of the Al-Si alloy samples are tested i.e. Al-3.4%Sn-0%Zn, Al-3%Sn- 0.4%Zn, Al-2.4%Sn-1%Zn, Al-2%Sn-1.4%Zn, Al-1.5%Sn-1.9%Zn, Al-1.2%Sn-2.2%Zn, Al-0.8%Sn-2.6%Zn, Al-0.2%Sn- 3.2%Zn, Al-0%Sn-3.4%Zn. Keywords : Aluminum, Composition Analysis, Density, Hardness, Porosity, Powder metallurgy, Silicon, Tribological property. 1. Introduction Aluminum (Al) is one of the most plentiful elements on earth which has wide industrial applications [1]. 
Addition of silicon to aluminum results in high strength to weight ratio, low thermal expansion co-efficient and high wear resistance. Many researchers have studied different aspects of Al-Si or other alloy [2][3]. The wear behavior of both as-cast and heat treated specimens were studied under dry sliding conditions at room temperature using a pin-on- disc type wear testing apparatus[4].in this work, nine Al-Si-Cu-Mg alloy cylindrical pins are produced using powder metallurgy with different compositions (i.e. Al- 3.4%Sn-0%Zn,Al-3%Sn-0.4%Zn, Al-2.4%Sn-1%Zn,Al-2%Sn-1.4%Zn,Al-1.5%Sn-1.9%Zn,Al-1.2%Sn-2.2%Zn,Al0.8%Sn- 2.6%Zn,Al-0.2%Sn-3.2%Zn,Al-0%Sn-3.4%Zn) and their density, fractional porosity, hardness number, wear behavior and microstructure samples are studied. 2. Experimental procedures Here ball milling type mixing method is adopted for proper mixing and the balls were rotated using lathe chuck. The die sets are made of die steel with0.1mm clearance between die and punch. Compaction of different samples is done in the universal testing m/c at different compaction pressures by the help of die and punch with pressure being applied as 100 kn,105 kn and110 kn. Sintering is done in the muffle furnace at 520 C. The heat treatment process is carried out in a vacuum furnace controlled by programming through PID controller.

3. Results and Discussions

3.1 Effect of Composition on Density

Table 1: Density values of samples

From the density measurements in Table 1, it is evident that the density of the alloys decreases first and then increases again with increasing amount of Zinc and decreasing amount of Tin.

3.2 Effect of Composition on Porosity

Table 2: Fractional porosity of samples [5]

where γ = fractional porosity of the powder metallurgy component, γ_p = density of the sintered component, and γ_s = density of the solid material. From the porosity measurements in Table 2 it can be concluded that the porosity of the alloy is inversely proportional to the density of the alloy samples.

3.3 Effect of Composition on Hardness

Hardness measurement for each sample of the alloys is carried out on a Rockwell Hardness Testing Machine.

Table 3: Hardness of samples
Fig 1: Graph depicting S/N ratios of hardness values of samples

The hardness test implies that the hardness of the alloys first increases, then decreases and then finally increases consistently with increasing amount of Zinc and decreasing amount of Tin in the alloys. Thus it can be deduced that with an optimal combination of Tin and Zinc (more mass % than used here), the hardness of the alloy can be improved appreciably.

3.4 Effect of Composition on Wearing Property

Table 4: Wear test of samples [6]

where M_i = initial mass (g), M_f = final mass (g), t = time of rotation in minutes, D = track diameter (mm), N = rpm, W_r = wear in g per metre of rotation, and sliding speed = πDN/1000 m/min. From the readings it is understood that at higher rpm the wear increases, and vice versa.

3.5 Effect of Composition on Microstructure

Microstructure analysis is basically done to know the surface finish of the product. The following images show the microstructure of all nine samples. It was learnt that with increased compaction pressure during the compaction process, the surface integrity was better compared to that at lower applied pressure.
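The porosity and wear quantities above are defined only through their symbols in the extracted text; the sketch below evaluates them using the standard powder-metallurgy relations (fractional porosity = 1 - ρ_sintered/ρ_solid, and wear rate as mass loss per metre of sliding on the pin-on-disc tester). These formulas and all numerical inputs are assumptions for illustration, not the measured values of Tables 1-4.

```python
import math

def fractional_porosity(rho_sintered, rho_solid):
    """Standard P/M porosity estimate: gamma = 1 - (rho_p / rho_s)."""
    return 1.0 - rho_sintered / rho_solid

def wear_rate(m_initial_g, m_final_g, track_dia_mm, rpm, minutes):
    """Mass loss per metre of sliding on a pin-on-disc tester (g/m)."""
    sliding_speed = math.pi * track_dia_mm * rpm / 1000.0   # m/min
    sliding_distance = sliding_speed * minutes              # m
    return (m_initial_g - m_final_g) / sliding_distance

# Placeholder inputs (assumed, for illustration only)
print(fractional_porosity(rho_sintered=2.55, rho_solid=2.70))   # ~0.056
print(wear_rate(m_initial_g=5.250, m_final_g=5.238,
                track_dia_mm=80, rpm=300, minutes=10))
```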

88 140 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 141 Development of an AHP-Based Expert System for Non Traditional Machines Selection Khalil Abrar Barkati 1, Shakil Ahmed 2, Anupam Halder 3, Sukanta Kumar Naskar 4 1,2 M.Tech Student of Manufacturing Technology of NITTTR, Kolkata 3 Asst. Professor of Mechanical Engineering of Netaji Subhas Engg College, Kolkata 4 Associate Professor of CDC Department of NITTTR, Kolkata Abstract References 1. Gaber A. et al., Precipitation kinetics of Al 1.12 Mg2Si 0.35 Si and Al 1.07 Mg2Si 0.33 Cu alloys, Journal of Alloys and Compounds, 429, 2007, Alpas A.T., Zhang J., Effect of SiC particulate reinforcement on the dry sliding wear of aluminum-silicon alloys, 1992,Vol. 155,, Issue Haque M.M., Sharif A. /Study on wear properties of aluminum-silicon piston alloy/journal of Materials Processing Technology, 2001, Vol.118, Issues SinglaManoj et al., Journal of Minerals and Materials Characterization and Engineering, Vol. 8(10), 2009, pp Angelo P.C Powder Metallurgy: science, Technology and Applications, 2008, Prentice- Hall of India Private Limited New Delhi Non Traditional Machines are capable of giving us desired accurate and surface finish work piece in manufacturing plant. Selection of the best Non Traditional Machining is a critical, complex and time consuming task due to availability of a wide range of alternatives and conflicting nature of several evaluation criteria. In this paper the adopted AHP-based methodology helps in selecting best Non Traditional Machining for a manufacturing organization. An expert system based on AHPtechnique is developed in Microsoft Excel to make the lengthy and critical calculation easier for the NTM- selection procedure for different manufacturing plant. Keywords : Non Traditional Machining; Multi Criteria Decision Making; Analytical Hierarchy Process 1. Introduction In conventional machining operations, high degree of precision and accuracy cannot be achieved as the materials are removed in the form of chips from the workpiece surface. On the other hand the advanced machining processes use energy (mechanical, thermoelectric, electrochemical, chemical, sound etc.) in its direct form to remove materials in the form of atoms or molecules. By using these machining processes, workpiece with the desired accuracy and surface finish can be obtained. Low applied forces can also prevent damage to the workpiece surface that may occur during conventional machining operations. These advanced machining processes can be used to generate intricate and accurate shapes in materials, like titanium, fiber-reinforced composites, ceramics, refractories and other difficult-to-machine alloys having higher strength, hardness, toughness and other diverse material properties. Advanced machining processes with different capabilities and specifications for a wide range of applications are available today. Hence, there is a need for selecting a suitable advanced machining process for a particular application for effective utilization of its capabilities. Review of the Literature on Advanced Machining Process Selection: Cogun developed a computer-aided selection procedure was presented as a general purpose aid to the designer in making preliminary selections of non-traditional (non-conventional) machining processes for a given component. 
The selection procedure would use an interactively generated 16-digit classification code to eliminate unsuitable combinations from consideration and rank the remaining machining processes. In that study, only the work materials and some of the important process capabilities, e.g. minimum surface finish, minimum size tolerance, minimum corner radii, minimum taper, minimum hole diameter, maximum hole height-to-diameter ratio and minimum width of cut were used to determine the best selection among the available non-traditional machining processes. Yurdakul and Cogun presented a multi-attribute selection procedure to help the manufacturing personnel in detecting suitable non-traditional machining processes for given application requirements. The selection procedure adopted a combination of ANP and TOPSIS methods to combine different types of qualitative and quantitative measures of performance, such as workpiece material, cost, tolerance, surface finish etc. for the selection procedure. In the first step, feasible processes were short-listed and in the next step, the processes were ranked according to their suitability for the desired application. The applicability of the proposed approach was illustrated using five case studies.

Chakraborty and Dey presented a systematic methodology for selecting the best non-traditional machining process under constrained material and machining conditions. An AHP-based expert system with a graphical user interface was designed to ease the decision-making process. The developed expert system had a dynamic database that could be modified according to the required changes in the material and machining conditions. The developed approach was less susceptible to human error and did not require the decision makers to have any in-depth technological knowledge regarding the available non-traditional machining processes and their capabilities.

Chakrabarti et al. developed a management information system (MIS) for application in the selection of complex machining processes, such as non-traditional machining processes, in modern manufacturing industries. A typical machining process has multiple parameters and various relationships may exist among these parameters. The typical problems of parameter selection and optimization in machining processes were considered to design the MIS. An expert system for handling air water jet, electrical discharge and wire electrical discharge machining process data was developed in Visual BASIC.

2. Case Study

A case study is considered of a manufacturing plant dealing with an enormous volume of production. The plant wants to set up a non-traditional machine but is unsure which one will be best for it. Taking attributes such as surface finish, MRR, cost and surface damage, this paper suggests to the manufacturing plant the best NTM (out of Abrasive Jet Machining, Ultrasonic Machining, Chemical Machining and Electrical Discharge Machining) with the help of the AHP method, using Microsoft Excel for better and faster calculation.

3. Result and Calculation

The decision matrix relates the four alternatives (AJM, USM, CHM, EDM) to the four attributes (surface finish, MRR, cost and surface damage). For each attribute, a pairwise comparison matrix of the alternatives is constructed; normalizing its columns and averaging the rows gives the priority vector (the weights of the WHATs) for that attribute. This is done in turn for surface finish, MRR, cost and surface damage, and the resulting priority vectors are combined with the attribute weights to rank the alternatives, as sketched below.
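A minimal sketch of the AHP arithmetic described above: each pairwise comparison matrix is column-normalized and the row means give a priority vector, and the alternative priorities are then combined with the criteria weights. The matrices and weights below are illustrative placeholders on Saaty's 1-9 scale, not the values used in the paper's Excel workbook.

```python
import numpy as np

def priority_vector(pairwise):
    """Approximate AHP priorities: normalize each column, then average the rows."""
    A = np.asarray(pairwise, dtype=float)
    return (A / A.sum(axis=0)).mean(axis=1)

alternatives = ["AJM", "USM", "CHM", "EDM"]

# Illustrative pairwise comparisons of the alternatives for ONE attribute
# (surface finish), on Saaty's 1-9 scale.  Placeholder judgments only.
surface_finish = [[1,   1/2, 3,   2],
                  [2,   1,   4,   3],
                  [1/3, 1/4, 1,   1/2],
                  [1/2, 1/3, 2,   1]]

# Attribute weights (surface finish, MRR, cost, surface damage) - placeholders
criteria_weights = np.array([0.35, 0.30, 0.20, 0.15])

# In the full model one priority vector per attribute is stacked column-wise;
# here the same matrix stands in for all four attributes just to show the mechanics.
local = np.column_stack([priority_vector(surface_finish)] * 4)
overall = local @ criteria_weights
for name, score in sorted(zip(alternatives, overall), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```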

90 144 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 145 Normalization: criteria. Ranks of the Non-Traditional process are (USM>AJM>CHM>EDM).The non-traditional machining USM is preferable. priority vector: Non Traditional Machining Surface finish MRR Cost Surface damage AJM USM CHM EDM Normalization priority vector: Ranking: * Non Traditional Maching Rank AJM Rank 2 USM Rank1 CHM Rank 3 EDM Rank 4 4. Conclusions Considering processes are Abrasive jet Machining (AJM), Ultra Sonic Machining (USM), Chemical Machining (CHM), and Electrical Discharge Machining (EDM).The attributes considered for the selection included surface damage, surface finish, MRR, and cost. MRR is the beneficial criteria, whereas, surface damage, surface finish and cost are the non-beneficial NTM process Based on the set objectives and the results obtained while using the developed AHP-based selection tool, the following general conclusions can be drawn. a) With the help of a 9-point scale in the house of quality matrix, weights can be manipulated to accommodate the customers requirements and/or technical requirements. b) The software prototype developed in Microsoft Excel automates the decision-making process, thus saving a lot of time. c) An excellent match of the rankings obtained using the AHP-based approach with those derived by the past researchers proves its applicability in providing acceptable and accurate results. d) The AHP-based decision-making tool can be adopted in the manufacturing domain very effectively. e) While adopting the developed tool, the designers or process engineers need not now require to have an in-depth technological knowledge about the capabilities and characteristics of various alternatives and the related criteria. f) While automating the decision-making process, it now helps the decision makers not committing any error in the selection procedure. g) The developed AHP-based approach may guide the decision makers to perform sensitivity analysis to exhibit the effect of various changing criteria on the final ranking of the alternatives. h) It is quite flexible to accommodate any number of alternatives and any number ofcriteria in the decision-making process. Based on this AHP-based approach, a real time expert system may be developed and augmented in the manufacturing organizations to support the decision-making task with an exhaustive dynamic database to make it more universal and acceptable. References 1. Mayyas, A., Shen, Q., Mayyas, A., Mahmoud, A., Shan, D., Qattawi, A. and Omar, M., Using quality function deployment and analytical hierarchy process for material selection of body-in-white, Materials & Design, 32, , Cavallini, C, Giorgetti, A., Citti, P. and Nicolaie, F., Integral aided method for material selection based on quality function deployment and comprehensive VIKOR algorithm, Materials & Design, 47, 27-34, Gupta, N., Material selection for thin-film solar cells using multiple attribute decision making approach, Materials & Design, 32, , 2011.

91 146 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) Jee, D-H. and Kang, K-J., A method for optimal material selection aided with decision making theory, Materials & Design, 21, , Ashby, M.F., Brechet, Y.J.M., Cebon, D. and Salvoc, L., Selection strategies for materials and processes, Materials & Design, 25, 51-67, Edwards, K.L., Selecting materials for optimum use in engineering components, Materials & Design, 26, , Shanian, A. and Savadogo, O., TOPSIS multiple-criteria decision support analysis for material selection of metallic bipolar plates for polymer electrolyte fuel cell, Journal of Power Sources, 159, , approach, Expert Systems with Applications, 34, , Onut. S., Kara, S.S. and Efendigil, T., A hybrid fuzzy MCDM approach to machine tool selection, Journal of Intelligent Manufacturing, 19, , Dagdeviren, M., Decision making in equipment selection: an integrated approach withahp and PROMETHEE, Journal of Intelligent Manufacturing, 19, , Yurdakul, M. and Tansel, Y., Analysis of the benefit generated by using fuzzy numbers in a TOPSIS model developed for machine tool selection problems, Journal of Materials Processing Technology, 2 0 9, , Tansel, Y. and Yurdakul, M., Development of a decision support system for machining center selection, Expert Systems with Applications, 36, , Cogun, C., Computer aided preliminary selection of non-traditional machining processes, International Journal of Machine Tools and Manufacture, 34, , Yurdakul, M. and Cogun, C., Development of a multi-attribute selection procedure for non-traditional machining processes, Proceedings of Institution of Mechanical Engineers, Journal of Engineering Manufacture, 217, , Chakraborty, S. and Dey, S., Design of an analytic-hierarchy-process-based expert system for non-traditional machining process selection, International Journal ofadvanced Manufacturing Technology, 31, , Chakladar, N. D., Das, R. and Chakraborty, S., (2009) A digraph-based expert system for non-traditional machining processes selection, International Journal of AdvancedManufacturing Technology, 43, , Abstract Management Aspects of Rice Mill Industries at Jalpaiguri 1 Soupayan Mitra, 2 Arnab Bhattacharya 1 Associate Professor, Mechanical Engineering Deptartment Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal, India. 2 Student, M.Tech Final Year, Mechanical Engineering Department Jalpaiguri Government Engineering College 2 arnab.me12@gmail.com In present investigation rice mill industries of Jalpaiguri Sadar Block has been thoroughly studied by mill visits. Production capacity, operational processes, energy consumption etc. are noted. Present operational and other related problems are identified and suitable management interventional solutions are proposed. This type of study over rice mills at Jalpaiguri has not been heard of. In fact, very little research work has been done on bio-resource based industries except tea in Jalpaiguri. Keywords : Rice mill industry, Husk, Jalpaiguri, Management 1. Introduction Jalpaiguri district is one of the largest districts of West Bengal covering an area of 3044 km². It is situated between and 27 0 North latitudes and 88 4 and East longitudes, which is a northern part of West Bengal. Jalpaiguri is part of monsoon climate zone of South-eastern Asia. Average temperature varies from highest 34 C in May to lowest about 11 C in January. 
Rainy season persists for about four months. The annual average rainfall is 3160mm. The entire topography is crisscrossed with rivulets, rivers and hills. Veined by mighty rivers like the Teesta, Torsa, Jaldhaka, Raidak, Dyna, Neora, Sankosh etc., this piece of land has been aptly named as the land of Tea, Timber and Tourism. Jalpaiguri lies in the agroclimatic zones partly of Northern Hill Region and partly of Terai-Teesta alluvial zone. The soil of Jalpaiguri is mainly fine loamy to coarse loamy containing good quantity of sand mixed with the soil, this type of soil generally needs more irrigation and frequent watering. Jalpaiguri district is composed of seven Blocks. The district has two sub-divisions, namely Jalpaiguri and Mal. Jalpaiguri Sadar block is under Jalpaiguri sub-division and is mostly surrounding the Jalpaiguri Town. Jalpaiguri Block area is Km² and it has 14Gram Panchayets. According to 2011 Census, Jalpaiguri Block has a total population of 323,445 of which 261,784 are rural and 61,661 are urban (i.e., as per Census, dwellers of Rajganj, Belakoba etc. small towns except Jalpaiguri town). There are 166,016males and 157,409 females. 2. Status of paddy production in Jalpaiguri district Jalpaiguri district produces about 2.5% paddy of total production of West Bengal. The Table-1 depicts the paddy production in different blocks of Jalpaiguri district. In the Table, Rice and Rice husk values are calculated considering that on an average rice weight 63% and husk weight 20% of paddy.

92 148 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 149 Table1 : Paddy cultivation in Jalpaiguri District ( ) Rice husk is used as cattle feed, rural household burning fuel, for steam generation as boiler fuel and also is used as a source of silicates in glass manufacturing industry. Having calorific value of 3000kcal/kg, hush can be used as fuel for power generation and also by gasification process fuel gas can be obtained from husk. Rice bran is used in the manufacture of edible oil. Broken rice is low in economic value than fine rice and is generally used for rice flour, the cattle feed, in liquor industry etc. 3. Visit to rice mills In this present investigation all the six rice mills present in Jalpaiguri Sadar block are visited. For an example, data obtained from one of the rice mills (I.M. Rice Mill of Sadar Block), where 1920 MT of paddy per month processed, is given below : Table 2: Income and Expenditure details (per month basis) 3. Rice Milling Process The objective of the rice milling process is to remove the outside husk layer, bran layer from the paddy rice to produce whole white rice kernels that are sufficiently milled free from impurities and contains minimum broken kernels. The husk layer accounts for 20% of the weight of paddy and helps to protect against insect and fungal attack. When the husk is removed the rice is called as the brown rice. Brown rice consists of Bran layer and the endosperm (kernel i.e., white rice). The degree to which the Bran layer is removed is called the milling degree. From 100% paddy on an average 63% is white rice, 20% is husk, 5% is bran,5% is broken rice are obtained and the rest is waste product, like straw, dust, sand, clay etc. Normally two types of rice are produced in Jalpaiguri block i.e. Atap rice (non-boiled rice) and Boiled rice. Atap rice was earlier processed in single step process where husk and bran were removed in single unit in one go called Huller. Now-adays due to availability of advanced machinery two stage process are used where husk and bran are removed separately. Thus husk and bran can be used or sold separately. Boiled rice is produced in modern rice mills where paddy passes through number of stages starting from pre-cleaning to parboiling and finally white rice is produced. Following Flow diagram depicts the rice production process. In the above rice mill, as on January 2017, rice output is 63%of paddy i.e., 1200 MT per month and is sold at Rs 23 per kg i.e., Rs 2300 per quintal on average. Paddy procurement cost is Rs per kg i.e., Rs 1330 per quintal. Broken rice output is 5% of paddy i.e., 96 MT per month and is sold at Rs 14 per kg. Husk produced is 384 MT i.e., 20% of paddy and 80% of husk produced is consumed by mill boiler itself and rest 20% is sold at Rs 2 per kg. Bran output is 5% of paddy i.e., 96 MT per month and is sold at Rs 17 per kg. 4. Problems faced by Rice Mills 1. Lack of availability of raw material and low utilization of installed capacity It has been observed that most of the rice mills are suffering from low utilization of installed capacity. Many a times, this is due to failure of supply chain. Paddy is not readily available in adequate quantity all the time throughout the year. 
The reasons mostly relate to seasonal nature of paddy production which is mostly of Aman variety apart from failure of Kishan Mandi and presence of unscrupulous middle man in the supply business. Apart from that a portion of paddy is itself milled by the farmer at his home. Sometimes, all the machines in a particular mill do not match each other. For example, installed boiler capacity may not be sufficient to cater full capacity of the milling machine. However, rice mill owners face no problem in selling milled rice, husk, bran and broken rice. 2. Poor quality electricity, irregular power cuts, and voltage fluctuations The interruption in electric power is a constraint in processing of paddy for rice mills. As reported during survey, power cuts which becomes severe in rainy seasons for storms and frequent voltage fluctuations hamper production process. Modern rice mills use stand-by diesel generators which in turn escalate production cost. Fig 1: Flow diagram of rice production process 3. Lack of technical knowhow There is a scarcity of qualified technical personnel to look after and take care of the modern machines. Normally unskilled workers at low pay scale are employed in the rice mills. At the time of breakdown of machines, mills generally suffer from high shut down time and production loss due to non-availability of sound maintenance personnel. Sometimes inefficient

93 150 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 151 maintenance calls for recurring breakdown of machines. Again, it has been observed that many a times, mills are not run with sufficient efficiency. Lack of sound technical knowhow prevents to run mills at optimal operating parameters. 5. Some possible solutions To remain competitive in the market and for smooth and efficient operation of the mills, following measures may be considered: 1. Preventive maintenance following regular pre-scheduled routine will reduce breakdown numbers and hence, production loss. 2. Operational and safety trainings of unskilled manpower for smooth running of the mills and to avoid accidents. 3. Develop effective market network for smooth and in time supply of required quantity of raw materials. 4. Conduct energy audit to reduce electricity cost. Cost related to electricity and diesel is the second highest cost centre after procurement cost of raw material. 5. Install state-of-the-art machineries in place of old ones. 6. Estimation of power generation from rice husk It has been shown in the Table-1 that about MT of husk is produced yearly in Jalpaiguri Sadar block alone. Considering standard calorific value of rice husk as 3000 kcal/kg and that 500 kwh of electricity is produced from 1 MT of rice husk, it can be shown that kwh of electricity may be generated in Jalpaiguri Sadar block i.e. a power output potential of 1.12 MW is possible from Rice Husk.. Considering Indian average standard of electricity usage as 1010 kwh per capita per year following Central Electricity Authority information, it is estimated that the rice husk produced in Sadar block has the potential to provide electricity to about 10,000 people yearly of the block. 7. Conclusion In the present investigation rice husk production in the Jalpaiguri district with special emphasis to Jalpaiguri Sadar block has been considered. All the six rice mills in the Sadar block are visited as a part of this investigation. Income-expenditure data for one mill is analyzed. The common problems in rice mill operations are identified and possible solution measures are proposed. An estimation is carried out to know the power generation potential of rice husk produced in Jalpaiguri Sadar block. This type of investigation is really a rare one in Jalpaiguri block. References 1. Informationfrom Jalpaiguri District Agriculture office 2. Information from Jalpaiguri District Agri-Marketing Office 3. Data received during visits to rice mills Survey on Crop Disease Analysis using Soft Computing and Image Processing Techniques Kumar Arindam Singh 1, Pritam Dey, Surajit Dhua, Suraj Bernwal, Md. Aftabuddin Sk., Dhiman Mondal 2 Computer Science and Engineering Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal, India. 1 krarin1994@gmail.com, 2 mondal.dhiman@gmail.com Abstract Early disease detection is a major challenge in agriculture. It is beneficial for the social and economical aspects of the environment as well as for the economic aspects of the famers. The recent trends in the information lead to the development and usage of many techniques which can be directly used in the field of agriculture for the purpose of accurate early disease detection. The main objective of the paper is to focus on the area of plant pathology detection and classification only. 
The proposed algorithm begins with the acquisition of digital images of infected or non-infected plants, on which image processing is performed. The disease-infected regions are then differentiated from the non-infected regions using methods such as segmentation, feature extraction and neural-network-based classification.

Keywords: Image processing, feature extraction, soft computing, image acquisition, image preprocessing, neural network.

1. Introduction

The agricultural land mass is more than just a feeding source in today's world. The Indian economy is highly dependent on agricultural productivity. Therefore, in the field of agriculture, early detection of disease in plants plays an important role. To detect a plant disease at a very early stage, the use of an automatic disease detection technique is beneficial. In India, farmers suffer a lot when the quality and quantity of crops are affected by these diseases, and as a result it affects the social as well as economic status of not only the farmers but also the whole country. The existing method for plant disease detection is simply naked-eye observation by domain experts, through which identification and detection of plant diseases is done. But this is impractical for large fields and time consuming. At the same time, in some countries, farmers do not have proper facilities, or even the awareness that they can contact experts, and consulting experts is costly as well as time consuming. In such conditions the suggested technique [1, 2] proves to be beneficial in monitoring large fields of crops. Automatic detection of the diseases by just analyzing the symptoms from images of the plant leaves makes it easier to detect the diseases at an early stage and to prevent crop loss. Image processing techniques can be used in agricultural applications for different purposes, for example to detect diseases in leaf, stem and fruit, to quantify the area affected by disease, to find the shapes of affected areas, and to determine the color of affected areas [3, 4, 5].

2. Proposed technique

The methodological analysis of the present work is shown in Figure 1. The work commences with capturing images using cameras or scanners against a fixed background to reduce the effect of noise. These images undergo preprocessing steps like filtering and segmentation to get the area of interest. Then different texture and color features are extracted from the processed images. Finally, the feature values are given as inputs to the ANN classifier to classify the given image; a minimal illustrative sketch of this pipeline is given below.
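The sketch below is a minimal, self-contained version of the pipeline just described (segmentation of the region of interest, color-feature extraction, ANN classification). It uses simple per-channel statistics as features and scikit-learn's MLPClassifier as the ANN; the real system would use leaf images, a curated image database and richer texture and color features, so everything here (the white-background threshold, the feature set, the synthetic data) is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def segment_leaf(rgb):
    """Crude region-of-interest mask: keep pixels that are not near-white background."""
    return rgb.mean(axis=2) < 240          # assumes a white studio background

def extract_features(rgb):
    """Per-channel mean and standard deviation over the segmented region (6 features)."""
    mask = segment_leaf(rgb)
    pixels = rgb[mask].astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Synthetic stand-ins for an image database: random 64x64 RGB "leaves",
# class 0 = healthy, class 1 = diseased (illustration only).
rng = np.random.default_rng(0)
healthy = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
diseased = [np.clip(img.astype(int) + [40, -30, -30], 0, 255).astype(np.uint8)
            for img in healthy]            # shift colors to mimic lesions
X = np.array([extract_features(img) for img in healthy + diseased])
y = np.array([0] * 20 + [1] * 20)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```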

94 152 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 153 The image processing can be used in agricultural applications for following purposes: A. Image Acquisition: The first step is to capture the sample images using high quality camera form a fixed distance with a fixed background. B. Image Pre-processing: In this step the acquired images are pre-processed before feature extraction. First the background of all the captured RGB images is made white to reduce computational complexities. Then the images are pre-processed using different techniques like image enhancement, image segmentation etc. Image segmentation is the process to simplify an image representation into something more meaningful and easier to analyze. It is used to extract the area of interest from the RGB images. C. Image Database: The next step is to create a database that consists of all the images that would be required during training and testing purposes. The construction of an image database is clearly dependent on the application. [7] Such databases consisting of images facilitates better efficiency and simplification of the classifier. D. Feature Extraction: In this step different color, shape and texture based features are extracted. Since the captured images are of different sizes, resizing those images may lead to loss of some valuable information. For this reason all the extracted features are made size invariant. [9] Networks and then the system is trained. In testing phase, the selected feature values taken from the examined images are given as an input to the system for detection and classification. An Artificial Neural Network(ANN) is an information processing paradigm which is based on the working of nervous system in our body. The key element of this paradigm is the novel structure of the information processing system. It is consists of a high number of interconnecting processed elements called neurons, which work in together to complete a specific assigned task. ANNs, like people, learn by example. An ANN can be used to perform specific application, such as pattern recognition or data classification. G. Analysis of the Existing Methods : In this section Table 1 shows the results of different techniques used to detect different diseases along with their experimental results as reported. Table 1: Results of different existing methods 3. Conclusions The contributions of expert systems in the field of agricultural sciences are growing tremendously. To wind up all the information discussed above, it is an accurate technique for automatic detection and classification of plant diseases. From the above discussion it may be noted that using SVM and ANN for classification purposes provides better result with respect to other methods. Image pre-processing, feature extraction and feature selection plays a vital role in improving the performance of the expert systems. The work may be extended using different crops and different diseases of particular crops. References Fig 1: Flow chart of disease detection techniques. E. Dominating Feature Set Selection: Depending on the type of disease and symptoms of the disease a number of features are extracted which may help to detect or classify the disease from the others. But it may happen that some of the extracted features do not contribute in deciding the type of disease or to classify the diseases. 
Taking those features in to consideration for decision may degrade the performance. For this reason dominating feature set is selected from the set of extracted features. F. Detection & Classification: This method consists of two phases, (i) Training (ii) Testing. The first step is to divide the dataset in to sub-datasets which are individually required during training and testing phase for the developed system. In training phase, the selected feature values of the images are given as input to the classifiers such as Artificial Neural 1. Niket Amoda,,Bharat Jadav, Detection and Classification of Plan Disease,ISSN , International Journal of Computer Science Vol.25,No.3,pp August Otsu, N., A Threshold Selection Method from Gray- Level Histograms, IEEE Transactions on System & Man,Vol. 9.1, 1979, pp Revathi, P., and M. Hemalatha. Classification of cotton leaf spot diseases using image processing edge detection techniques, 2012 International Conference on Emerging Trends in Science Engineering and Technology. 4. Weizheng, S., Yachun, W., Zhanliang, C., and Hongda, W. (2008). Grading Method of Leaf Spot Disease Based on Image Processing. 5. Gudiño, J. Gudiño-Bazaldúa, V.M. Castaño, Colorimage segmentation using perceptual spaces through applets for determining and preventing diseases in chili peppers, African Journal of Biotechnology Vol. 12,no.7, pp , 2013.

95 154 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) J.Yang, C. Liu & L.Zhang, Polar space normalization: Enhancing the discriminating power of polar spaces for face recognition, Pattern Recognit. 43, pp , (2010). 7. Rabia Masood,S.A. Khan, Plant Diesease Segmentation Using Image Processing, I.J. Modern Education and Computer Science,2016,1, Ananthi S,Varthini SV Detection of unhealthy region of plant leaves of plant leaf diseases using texture features. Agric Int: CIGR Journal Vol.1, Mondal, Dhiman, Aruna Chakraborty, Dipak Kumar Kole, and D. Dutta Majumder. Detection and classification technique of Yellow Vein Mosaic Virus disease in okra leaf images using leaf vein extraction and Naive Bayesian classifier, 2015 International Conference on Soft Computing Techniques and Implementations (ICSCTI), Abstract Analyzing User Activities Using Vector Space Model in Online Social Networks Dhrubasish Sarkar 1, Premananda Jana 2 1 School of Management & Allied Courses, Amity University Kolkata 2 Department of CSE, MCKV Institute of Engineering, Liluah, Howrah 1 dhrubasish@inbox.com, 2 prema_jana@yahoo.com The increasing popularity of internet, wireless technologies and mobile devices has led to the birth of mass connectivity and online interaction through Online Social Networks (OSNs) and similar environments. OSN reflects a social structure consist of a set of individuals and different types of ties like connections, relationships, interactions etc among them and helps its users to connect with their friends and common interest groups, share views and to pass information. Now days the users choose OSN sites as a most preferred place for sharing their updates, different views, posting photographs and would like to make it available for others for viewing, rating and making comments. The current paper aims to explore and analyze the association between the objects (like photographs, posts etc) and its viewers (friends, acquaintances etc) for a given user and to find activity relationship among them by using the TF-IDF scheme of Vector Space Model. After vectorization the vector data has been presented through a weighted graph with various properties. Keywords : vector space model, term frequency inverse document frequency, online social network, graph visualization, data mining 1. Introduction Now days, with the immense growth and popularity of internet, wireless technologies and mobile devices, the Online Social Networks (OSNs) and similar applications have rapidly promoted networked interaction environment where people can interact with each other easily and instantly. An OSN reflects a social structure consists of a set of individuals and different types of association or ties among them like connections, interactions, relationship etc [1]. Some of the OSN sites are popular in some countries while some of them are having global reach. Some of the sites are designed to cater the need of a specific interest group while others are general in nature [2]. Generally the OSN sites enable its uses to post their various updates or comments, upload various documents, pictures etc which are available to their friends and acquaintances for viewing, sharing and making comments, express liking etc. Those data mostly are non tabular in nature. 
These types of unstructured data are first converted to tabular data and then various data mining algorithms can be used to analyze the relationships among them. To convert such non tabular, text data into tabular format, the vectorization process can be used. The proposed model utilizes one such well-known vectorization known as Vector Space Model (VSM) introduced by Salton, Wong and Yang [3]. In this paper, the term frequency - inverse document frequency (TF-IDF) weighting scheme has been used on a set of collected data on user activities as friends or acquaintances and then the processed, tabular data has been presented through a weighted graph using Gephi [8] software tool. The paper is organized as follows. Section II discusses the concepts of Vector Space Model and TF-IDF scheme. Section III discusses Related Works. Section IV discusses the Data Models and experimental results. Last section contains conclusions and future scope. 2. Basic Concepts of Vector Space Model and TF-IDF Scheme The Vector Space Model is a standard and well known method for vectorization in information retrieval introduced by Salton, Wong and Yang [3]. The goal of the VSM is to convert these textual documents to feature vectors [4].

96 156 National Conference on Recent Trends in Information Technology and Management (RTITM 2017) National Conference on Recent Trends in Information Technology and Management (RTITM 2017) 157 D is a given set of documents and each document is a set of words. Document i with vector di can be represented as The document strings are like: d 1 = {UID1, UID2, UID3}, d 2 = {UID3, UID2, UID4}, d 3 = {UID5, UID6, UID4} where w j,i represents the weight for word j that occurs in document i and N is the number of words used for vectorization. To compute w j,i, we can set it to 1 when the word j exist in document i and 0 otherwise. The number of times the word j is observed in document i can also be recorded. A more generalized scheme named term frequency inverse document frequency (TF-IDF) where w j,i is calculated as Where d 1, d 2, d 3 etc are various objected posted by the users and UID1, UID2 etc are the actors. Table 1 shows Hence the tf values. where tf j,i is the frequency of word j in document i. idf j is the inverse frequency of word j across all documents, The idf values are: Table 1 idf UID1 = log 2 (3/1)=1.584, idf UID2 = log2 (3/2)=0.584, idf UID3 = log 2 (3/2) =0.584 which is the logarithm of the total number of documents divided by the number of documents that contain word j. TF-IDF assigns higher weights to words that are less frequent across documents and at the same time have higher frequencies within the document they are used. Also which are common in all documents are assigned smaller weights. 3. Related Works In recent past, social network analysis has drawn the attention of many researchers significantly. In [9], the authors present a technique by using Association Rule Mining to analyze the activities performed by the friends or acquaintances against user s posts. In [5], the authors present a VSM approach for extracting and representing relations from text corpus. The proposed approach uses VSM to represent the weight value of every social object s frequency in every text. It reflects the relationships between social objects and text corpus. The authors conclude that the application of VSM can obtain deeper social relations hid in text and text corpus and increase the effect and efficiency of social network analysis of text corpus. In [6], the authors use Improved Vector Space Model (IVSM) to effectively mine social networks of person entities from Wikipedia where a person entity was represented as a vector by anchor text set and content text set of his page in Wikipedia. In [7], the authors propose a novel method called WR-KMeans based on Extended Vector Space Model which outperforms the traditional k-means and bisecting k-means algorithms. 4. Proposed Model and Experimental Results The OSN sites reflect user profiles and the ties among them. The users profile contains information about the users and their activities. In this paper, primarily we focus on the data generated through the activities of friends or acquaintances on others contents like photographs, posts, updates etc. Our aim is to explore and analyze the association between the objects like photographs, posts etc. and its viewers like friends, acquaintances etc. for a given user and to find activity relationship among them. In this paper we try to capture the activities of the actors like friends, acquaintances etc. based on their actions like viewing, rating, making comments etc. Performed on various posted objects like updates, photographs, documents, videos etc. Posted by a user. 
As these data are unstructured in nature, we use the TF-IDF weighting scheme of Vector Space Model for vectorization purpose to get tabular data. After vectorization the vector data are being presented in a form of a weighted graph through Gephi software. As we see that TF-IDF assigns higher weights to words that are less frequent across documents and at the same time have higher frequencies within the document they are used. Also which are common in all documents are assigned smaller weights. Likewise the common actors across all the posted objects are assigned smaller weights and the actors who are less frequent across all the posted objects but performed multiple actions (multiple posts, likings, ratings etc) on specific object(s) are assigned higher weights. idf UID4 = log 2 (3/2) =0.584, idf UID5 = log 2 (3/1) =1.584, idf UID6 = log 2 (3/1) =1.584 Now the TF-IDF values are calculated by multiplying the tf values with the the idf values calculated above. Table 2 Table - 2 represents vector data available for various data mining task and visualization. Here UID1 has higher weight for d1 as UID5 and UID6 have for d3. UID3 and UID4 are common for d1 and d2. Higher weight denotes uncommon actor and stronger association. Picture-1 shows the graphical representation of Table-2 using Gephi software where the bold lines indicate higher weights. Gephi produces the following statistics based on the graph: Diameter: 6, Radius: 3, Average Path length: , Number of shortest paths: 72, Average Weighted Degree: 1.835, Graph Density: Picture-1 5. Conclusion and Future Scope In this paper, the TF-IDF weighting scheme of Vector Space Model has been used to analyze user activities and to find association between the objects (like photographs, documents, posts etc) and its viewers (friends, acquaintances etc) for a given user. The unstructured data of user activities has been converted and presented in tabular form. Table-2 shows the TF- IDF values against the activities performed by the various actors on different objects. The tabular data may be processed using various data mining algorithms to further analyze and visualize which may help to find better relationship and association among them.

References
1. J. Zhou, Y. Zhang and J. Cheng, Preference-based Mining of Top-K Influential Nodes in Social Networks, Future Generation Computer Systems, Volume 31, Pages 40-47.
2. S. Singh, N. Mishra and S. Sharma, Survey of Various Techniques for Determining Influential Users in Social Networks, in Proceedings of the International Conference on Emerging Trends in Computing, Communication and Nanotechnology (ICE-CCN), 2013.
3. G. Salton, A. Wong and C.S. Yang, A vector space model for automatic indexing, Communications of the ACM 18 (1975), no. 11.
4. R. Zafarani, M. A. Abbasi and H. Liu, Social Media Mining: An Introduction, Cambridge University Press.
5. X. Guo, Y. Xiang and Q. Chen, A vector space model approach to social relation extraction from text corpus, 2011 Eighth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Shanghai, 2011.
6. Fangfang Yang, Zhiming Xu, Sheng Li and Xiukun Li, Social network mining based on improved vector space model, in Proceedings of the Second International Conference on Internet Multimedia Computing and Service (ICIMCS '10), ACM, New York, NY, USA.
7. Le Wang, Yan Jia and Weihong Han, Instant message clustering based on extended vector space model, International Symposium on Intelligence Computation and Applications, Springer Berlin Heidelberg.
8. accessed on .
9. Dhrubasish Sarkar, Dipak K. Kole, Premananda Jana and Aruna Chakraborty, Users Activity Measure in Online Social Networks using Association Rule Mining, IEMCON 2014: 5th International Conference on Electronics Engineering and Computer Science, Kolkata, 2014.

Fighter Aircraft Selection Using TOPSIS Method
Sagnik Paul 1, Sankhadeep Bose 1, Joydeep Singha 1, Dipika Pramanik 2, Ranajit Midya 3, Dr. Anupam Haldar 3
1 Student, Department of Mechanical Engineering, 2 Department of Information Technology, 3 Faculty, Department of Mechanical Engineering, Netaji Subhash Engineering College, Kolkata
Abstract
Strategic management and technical decisions influence several areas of an organization, and defence applications are no exception. Once such decisions have been made, the criteria for implementing the subsequent operational decisions must be re-evaluated. Defence purchases amounting to several million dollars are sometimes spoiled because the required level of output is not achieved; sometimes the cost does not tally with the performance level and the investment goes in vain. A straightforward selection method covering every level of criteria and sub-criteria is therefore essential. The aim of this study is to develop an approach for assessing fighter aircraft alternatives based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). With the help of journals by experts in this field and the relevant specialized literature, the authors identified the variables and effective norms in fighter aircraft selection. Fighter aircraft selection is a multi-criteria problem which comprises both qualitative and quantitative factors (criteria). The concept permits the pursuit of the best of the available alternatives through a simple mathematical approach.
Keywords: Fighter aircraft selection, TOPSIS, Multi-criteria Decision Making.
1. Introduction
Today, technology is evolving at an unimaginable pace. With the speeding up of development in the field of technology, it becomes essential to take decisions more deliberately for the advancement of that technology. Multiple-criteria decision-making (MCDM), also known as multiple-criteria decision analysis (MCDA), is a sub-domain of operations research that performs a comparative study of numerous conflicting criteria in decision making. Conflicting criteria are quite common when considering options: cost or price is usually one of the prime criteria, and some measure of quality is typically another criterion that easily clashes with the cost. In purchasing a fighter aircraft, the following criteria are considered to choose the best one for a country's protection: a) Maximum Speed (Mach Number) (X1); b) Ferry Range (Nautical Miles) (X2); c) Maximum Payload (lbs) (X3); d) Acquisition Cost (Million Dollars) (X4); e) Reliability (High-Low) (X5); and f) Maneuverability (High-Low) (X6). On the basis of these criteria, four models of fighter aircraft, 1) Sukhoi, 2) HAL Tejas, 3) MiG-29 and 4) Dassault Mirage, are evaluated. Haldar et al. (2012) suggested a rather quantitative approach for strategically selecting the supplier. The remainder of the paper is organised as follows: Section 2 describes the methodology of this research work, i.e., the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS); Section 3 describes the problem; Section 4 applies the method by solving the problem step by step; and Section 5 presents the discussion and conclusion of the work.
2. TOPSIS Method
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method. Hwang and Yoon developed this method in 1981, with further developments by Yoon in 1987 and later by Lai and Liu. The TOPSIS concept basically states that the chosen alternative should be closest (geometrically) to the positive ideal solution (PIS) and farthest (geometrically) from the negative ideal solution (NIS). It is a compensatory aggregation method which compares a set of given choices by calculating weights for each criterion, normalizing the values for each criterion, and computing the geometric distance between each alternative and the ideal alternative, which has the best value on each criterion.

Normalization is often needed because the parameters are usually of incongruent dimensions in multi-criteria problems. A presumption of TOPSIS is that the criteria are monotonically increasing or decreasing. Compensatory methods such as TOPSIS allow trade-offs between criteria, which gives a far more realistic framework than other, non-compensatory methods.
3. Description of the Problem
Defence purchases amounting to several million dollars are sometimes spoiled because the required level of output is not achieved. With several considerations taken into account, the High Command has to decide on the best alternative within a given period of time to produce the best output. The Army wishes to select a competent supplier in advance to ensure nominal disturbance due to the occurrence of intermittent complications. In this multi-criteria decision making work, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to determine the selection index for choosing the best fighter aircraft. At first, the raw data are tabulated and the preferred weights of the different suppliers are calculated on the basis of the selection criteria using the TOPSIS methodology. Secondly, intangible information is converted into tangible information. At the third stage, TOPSIS is applied to determine the ranking of the fighter aircraft, and a decision matrix is developed for the opted fighter aircraft.
4. Solution Methodology
The MCDM method is a prevalent technique which is widely applied for selecting the best option among several alternatives having numerous attributes. The MCDM problem is described as follows: find the suitable solution, in the multi-criteria sense, from the given set of J feasible alternatives, denoted A1, A2, ..., AJ, evaluated against a set of n criterion functions. The input data are the elements f_ij of the performance (decision) matrix, where f_ij is the value of the i-th criterion function for alternative Aj. The corresponding weights of the criteria are taken as W = (0.2, 0.1, 0.1, 0.1, 0.2, 0.3). An MCDM problem can be represented by a decision matrix as shown below (linguistic assessments are given for the intangible criteria X5 and X6).
FIGHTER AIRCRAFT | X1 | X2 | X3 | X4 | X5 | X6
SUKHOI | | | | | avg | very high
HAL TEJAS | | | | | low | avg
MIG-29 | | | | | high | high
DASSAULT | | | | | avg | avg
Scale of intangibles (Saaty's 9-point scale):
Cost attributes | Value | Benefit attributes
Very High | 1 | Very Low
High | 3 | Low
Average | 5 | Average
Low | 7 | High
Very Low | 9 | Very High
Step 1: Formulation of the decision matrix using the numerical scale of intangibles
X1 | X2 | X3 | X4 | X5 | X6 (rows: SUKHOI, HAL TEJAS, MIG-29, DASSAULT)
Step 2: Formulation of the normalized decision matrix, r_ij = a_ij / sqrt(sum over i of a_ij^2)
X1 | X2 | X3 | X4 | X5 | X6 (rows: SUKHOI, HAL TEJAS, MIG-29, DASSAULT)
Step 3: Formulation of the weighted decision matrix V, obtained by multiplying each column of the normalized matrix R by the corresponding weight, W = (0.2, 0.1, 0.1, 0.1, 0.2, 0.3)
X1 | X2 | X3 | X4 | X5 | X6 (rows: SUKHOI, HAL TEJAS, MIG-29, DASSAULT)
Step 4: Determination of the ideal and negative-ideal solutions
Obtain the ideal (A*) and the negative-ideal (A-) solutions from the weighted decision matrix V:
A* = (0.1168, , , , , )
A- = (0.0841, , , , , )
Step 5: Calculation of the separation measures
Compute the separation measures from the ideal (S_i*) and the negative-ideal (S_i-) solution for all alternatives, i = 1, ..., m:
S_i* = sqrt( sum over j = 1, ..., n of (v_ij - v_j*)^2 )
S_i- = sqrt( sum over j = 1, ..., n of (v_ij - v_j-)^2 )
S_1* = , S_1- = , S_2* = , S_2- = , S_3* = , S_3- = , S_4* = , S_4- =
Step 6: Relative closeness to the ideal solution and preference ranking
For each alternative, determine the relative closeness to the ideal solution, C_i* (i = 1, ..., m), as C_i* = S_i- / (S_i* + S_i-). The closeness rating is a number between 0 and 1, with 0 being the worst possible and 1 the best possible solution. Here C_1* = 0.643, C_2* = 0.268, C_3* = and C_4* = .
FIGHTER AIRCRAFT MODEL | RANK IN THE PREFERENCE ORDER
SUKHOI | 1
HAL TEJAS | 4
MIG-29 | 2
DASSAULT MIRAGE | 3
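For illustration, the TOPSIS steps above can be carried out with the short Python sketch below. This is not the paper's code: the decision matrix entries are hypothetical placeholders (only the weights W = (0.2, 0.1, 0.1, 0.1, 0.2, 0.3) come from the paper), and treating X4 (acquisition cost) as the only cost criterion at the ideal-solution step is an assumption.

import numpy as np

# Hypothetical decision matrix: rows = Sukhoi, HAL Tejas, MiG-29, Dassault Mirage;
# columns = X1..X6 (intangible criteria already converted with the 9-point scale).
A = np.array([
    [2.0, 3000.0, 16000.0, 60.0, 5.0, 9.0],
    [1.8, 1700.0,  9000.0, 30.0, 3.0, 5.0],
    [2.3, 1200.0,  9000.0, 45.0, 7.0, 7.0],
    [2.2, 2100.0, 13000.0, 50.0, 5.0, 5.0],
])
W = np.array([0.2, 0.1, 0.1, 0.1, 0.2, 0.3])               # criteria weights from the paper
benefit = np.array([True, True, True, False, True, True])  # assumption: X4 (cost) is a cost criterion

R = A / np.sqrt((A ** 2).sum(axis=0))                      # Step 2: vector normalization
V = R * W                                                  # Step 3: weighted normalized matrix
A_star = np.where(benefit, V.max(axis=0), V.min(axis=0))   # Step 4: ideal solution
A_minus = np.where(benefit, V.min(axis=0), V.max(axis=0))  #         negative-ideal solution
S_star = np.sqrt(((V - A_star) ** 2).sum(axis=1))          # Step 5: separation measures
S_minus = np.sqrt(((V - A_minus) ** 2).sum(axis=1))
C = S_minus / (S_star + S_minus)                           # Step 6: relative closeness
print(np.round(C, 3), "ranking (best first):", np.argsort(-C) + 1)

The alternative with the largest closeness value C is ranked first, matching the preference-ranking rule in Step 6.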

5. Conclusion
This study analyzed how to select the best fighter aircraft considering various criteria using the TOPSIS method. Although many other techniques can address the problem, the study proposed a method and a procedure to apply TOPSIS to solve it. The main advantages of the TOPSIS method can be stated as follows: the TOPSIS logic is rational and easy to comprehend; the concept permits the pursuit of the best alternative through a simple mathematical projection; the computation processes are hassle-free; and the importance weights are incorporated into the comparison procedures. The decision-making process for selecting a specialized fighter aircraft from different suppliers is of special importance, and the results obtained from the numerical example show that the selected model generates the most suitable outcome among the given alternatives.
References
1. Haldar A., Ray A., Banerjee D. and Ghosh S., A Hybrid MCDM Model for Resilient Supplier Selection, International Journal of Management Science and Engineering Management, 7(4), 2012.
2. Chris I. E., Bell-Hanyes J., (2010), A model for quantifying strategic supplier selection: evidence from a generic pharmaceutical firm supply chain, International Journal of Business, Marketing, and Decision Sciences.
3. Mohammady Garfamy R., (2005), Supplier selection and business process improvement, doctoral thesis, Universitat Autonoma de Barcelona.
4. Jiang J., Chen Y.W., Tang D.W., Chen Y.W., (2010), TOPSIS with belief structure for group belief multiple criteria decision making, International Journal of Automation and Computing, vol. 7, no. 3.
5. Parthiban P., Mathiyalagan P., Punniyamoorty M., (2010), Optimization of supply chain performance using MCDM tool - a case study, International Journal of Value Chain Management, vol. 4.
6. Dabestani R., Shirouyehzad H., (2011), Evaluating Projects Based on Safety Criteria Using TOPSIS, 2nd International Conference on Construction and Project Management, IPEDR.
7. Wang Y.M., Elhag T.M.S., (2006), Fuzzy TOPSIS method based on alpha level sets with an application to bridge risk assessment, Expert Systems, vol. 31.
8. Toloei Eshlaghy A., Kalantary M., (2011), Supplier selection by Neo-TOPSIS, Applied Mathematical Sciences, vol. 5, no. 17.
9. Zarbini-Sydani A.H., Atef-Yekta E., Karbasi A., (2011), Evaluating and selecting supplier in textile industry using hierarchical fuzzy TOPSIS, Indian Journal of Science and Technology, vol. 4, no. 10.

Vendor Selection of Vehicle Silencer
Abhishek Bharti, Ranajit Midya, Dr. Anupam Haldar, Department of Mechanical Engineering; Dipika Pramanik, Department of Information Technology; Netaji Subhash Engineering College, Kolkata, India
Abstract
Vendor selection can be very difficult without the right approach. We have developed an efficient multi-criteria decision making (MCDM) approach which can be used for quality determination and performance justification. Vendor selection is an MCDM problem governed by multiple performance criteria, and these criteria may be both qualitative and quantitative. Qualitative criteria are largely evaluated on the basis of previous observation and expert belief, and the quality is analysed on a suitable conversion scale (a Likert scale). The whole observation is based on human judgement, so the forecast value may not always be exact because the process does not use real data. In this process, quantitative criteria values are converted into an equivalent single performance index called the Multi-attribute Performance Index (MPI), and the best alternative can be selected according to the MPI values of the alternatives. This study applies the VIKOR method, chosen from MCDM, for utilizing quantitative real performance estimate scores.
Keywords: VIKOR method, performance appraisal in vendor selection, Multi-criteria Decision Making.
1. Introduction
It is very vital that one knows how to begin the selection process. To obtain a successful vendor, we must follow some basic steps: firstly, analyse the business needs; secondly, search for likely vendors; thirdly, lead the team in selecting the winning vendor and conducting contract negotiations; and lastly, avoid negotiation mistakes. In today's competitive era, selection of a suitable vendor has become of great interest for various undertakings. Quality and performance judgement are required to select the best vendor before mass production is targeted. In most cases the selection is based on past observation. This process summarises and reckons the total cost caused by a supplier in the production process, and therefore increases the objectivity of the selection. The selection includes two primary goals: minimize the rejections and minimize the net late deliveries, considering the constraints regarding buyer demand, vendor capacity, vendor flexibility and the cost value of an item. Multiple-criteria decision-making (MCDM) is a sub-part of operations research that specifically assesses multiple criteria in decision making. Here, vendor selection for a vehicle silencer is explored. The enterprise has its own aims: to minimize price, volume, weight and the number of components, and to obtain high insertion loss.

For price, volume, weight and number of components, a lower-the-better (LB) attribute is considered, and for insertion loss a higher-the-better (HB) attribute is considered. A silencer is a mechanical device for reducing the amount of noise emitted by the exhaust of an engine. Insertion loss is the difference in sound power level measured at a point in the duct before and after the insertion of the silencer, and it is measured in decibels (dB). The attributes used as vendor selection criteria for the vehicle silencer are tabulated in Table 1.
Table 1: Criteria for a vehicle silencer - vendor selection criteria
2. Methodology
The MCDM method is a very effective technique widely used for determining the best solution among various alternatives or attributes. The MCDM process is stated as follows: determine the best solution, in the multi-criteria sense, from the set of J alternatives A_1, A_2, ..., A_J assessed according to a set of n criteria. The input data are the elements f_ij of the decision matrix, where f_ij is the value of the i-th criterion function for alternative A_j. An MCDM problem can be shown by a decision matrix, where A_i represents the i-th alternative, i = 1, 2, ..., m; C_j represents the j-th criterion, j = 1, 2, ..., n; and x_ij is the individual performance of an alternative. Solving an MCDM problem involves computing the utility of the alternatives and ranking them; the alternative with the highest utility measure is the best alternative.
Step 1: Representation of the normalized decision matrix
The normalized decision matrix is calculated from the decision matrix.
Step 2: Determination of the ideal and negative-ideal solutions
The ideal solution A* and the negative-ideal solution A- are determined from the maximum and minimum values of the normalized decision matrix over the alternatives.
Step 3: Estimation of the utility measure and regret measure for each alternative
The utility measure S_i and the regret (negative utility) measure R_i are computed for each alternative, where w_j is the weight of the individual criterion.
Table 4: Utility measure (S_i) and regret measure (R_i) of the alternatives
Step 4: Estimation of the VIKOR index
The VIKOR index Q_i represents the VIKOR value of the i-th alternative, i = 1, 2, ..., m, and v is the weight of the maximum group utility (often taken as 0.5). The alternative having the smallest VIKOR value is considered to be the optimum solution. The individual criterion weights have been taken as 0.2 (equal weight priority). The decision matrix is shown here.
3. Discussion & Conclusions
In the present study, the VIKOR method has been used to solve a multi-criteria decision making problem through a case study of vendor selection for a vehicle silencer. The process provides the selection of an appropriate vendor for producing the vehicle silencer, where five alternatives are evaluated on the basis of five attributes. Among the five, the alternative with the smallest VIKOR value is considered to be the best, so vendor C is the best alternative and is to be selected. The study reflects the effectiveness of MCDM techniques in solving the vendor selection problem.
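The formulas for the utility measure, regret measure and VIKOR index referred to in Section 2 are not reproduced in this text, so the following Python sketch uses the standard VIKOR definitions with v = 0.5 and equal weights of 0.2, as stated in the paper. The five vendors (A-E) and the numeric entries of the decision matrix are hypothetical placeholders, not the paper's data.

import numpy as np

# Hypothetical decision matrix: rows = vendors A..E; columns = price, volume, weight,
# insertion loss, number of components. Insertion loss is higher-the-better, the rest lower-the-better.
X = np.array([
    [1200.0, 9.0, 6.5, 28.0, 12.0],
    [1100.0, 8.5, 7.0, 25.0, 10.0],
    [1000.0, 8.0, 6.0, 30.0,  9.0],
    [1300.0, 9.5, 7.5, 32.0, 14.0],
    [1150.0, 8.8, 6.8, 27.0, 11.0],
])
w = np.full(5, 0.2)                                   # equal weight priority, as in the paper
benefit = np.array([False, False, False, True, False])
v = 0.5                                               # weight of the maximum group utility

f_star = np.where(benefit, X.max(axis=0), X.min(axis=0))   # best value of each criterion
f_minus = np.where(benefit, X.min(axis=0), X.max(axis=0))  # worst value of each criterion
d = (f_star - X) / (f_star - f_minus)                 # normalized distance from the best value

S = (w * d).sum(axis=1)                               # utility measure S_i
R = (w * d).max(axis=1)                               # regret measure R_i
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

print(np.round(Q, 3), "best vendor (smallest VIKOR index):", "ABCDE"[Q.argmin()])

The vendor with the smallest Q value is selected, which is the decision rule stated in Step 4.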

Multi-Criteria Supplier Selection Using Fuzzy PROMETHEE
Aditya Chakraborty 1, Sounak Chattopadhyay 1, Sourav Gupta 1, Ranajit Midya 2, Dr. Anupam Haldar 2, Dipika Pramanik 3
1 Student, Department of Mechanical Engineering, 2 Faculty, Department of Mechanical Engineering, 3 Department of Information Technology, Netaji Subhash Engineering College, Kolkata
Abstract
In manufacturing industries, one of the most important factors is supplier selection, as it is a vital decision in Supply Chain Management (SCM). Efficient purchasing and close participation with suppliers ensure healthy competition in markets. Supplier choice and ranking involve both qualitative and quantitative criteria, so it is a multi-criteria decision making (MCDM) problem. Under partial or unpredictable information, a triangular fuzzy scale helps in reaching decisions with approximate reasoning. In order to handle the inherent unpredictability introduced by indefinite situations in supplier selection, the PROMETHEE method is carried out in a fuzzy environment (F-PROMETHEE). Thus, in this paper, supplier selection on multiple criteria based on the F-PROMETHEE method is carried out, with an example of supplier ranking for a multiple criteria decision making problem.
1. Introduction
It is the main duty of supply chain management (SCM) to coordinate different suppliers to meet market demand. Supplier assessment and selection is very important for establishing an efficient supply chain, and various publications and research works on supplier selection using MCDM can be found. The main aim of our research work is to employ a method for the multiple criteria supplier selection problem based on fuzzy PROMETHEE which removes the unpredictability introduced by indefinite situations. According to researchers, supplier selection and assessment is one of the most vital processes of production and service management for many sectors within the supply chain; in manufacturing industries especially, raw materials and product parts constitute around 75% of the final product cost.
2. PROMETHEE (Preference Ranking Organization METHod for Enrichment Evaluation)
PROMETHEE, elaborated as the Preference Ranking Organization METHod for Enrichment Evaluation, is an outranking method that was first introduced by Prof. Brans in 1982 and was later developed and implemented by Prof. Brans and Prof. Mareschal. It is very easy to understand and apply compared with other MCDM methods, and it is widely applicable to problems where a finite set of suppliers has to be ranked according to several, sometimes clashing, criteria. The assessment is the starting point of the PROMETHEE method: in this step, alternatives are assessed with respect to the various criteria, and these assessments essentially involve numerical data. Researchers have stated that the implementation of PROMETHEE needs two further types of information:
- information on the relative importance (i.e. the weights) associated with the criteria considered, and
- information on the decision-maker's preference function, which is used while comparing the suppliers on each separate criterion.

Now, to apply the PROMETHEE method, the following five steps have to be carried out:
- establishing the elements of the decision-making problem matrix;
- application of the PROMETHEE I method;
- application of the PROMETHEE II method;
- carrying out the sensitivity analysis;
- ranking the suppliers and selecting the best one.
3. Establishing the elements of the decision-making problem matrix
As is the case with the other multi-criteria methods, a set of suppliers S = {s1, s2, ..., sm} is to be assessed based on a set of criteria K = {k1, k2, ..., kn}. One evaluates the performance of the suppliers with respect to each criterion, and the different criteria are then weighted depending upon their relative importance. The weights assigned to the decision-making criteria are positive, therefore wk > 0, and the sum of the weights over all criteria must equal 1 (Wikipedia, 2014), where wk is the weight assigned to criterion k. In this research paper we shall not present the method for determining the decision-making criterion weights. The elements of this multiple decision-making process are presented in the performance matrix (see Table 1).
Table 1: Performance matrix
4. PROMETHEE I method
In the application of the PROMETHEE I method, the performances of the possible suppliers need not be converted into a common dimensionless scale. PROMETHEE uses the preference function pk(si, sj), which is a function of the difference dk between two suppliers on a criterion k, for instance dk(si, sj) = kh(si) - kh(sj), where kh(si) and kh(sj) are the results of the two suppliers si and sj on that criterion. The result of the preference function ranges from 0 to 1, namely 0 <= pk(si, sj) <= 1: a value equal to 0 shows the least preference, while a value equal to 1 shows an extreme preference for the particular supplier. Depending upon the results of the preference matrix, the selector computes the outranking index Φ+(si) and the outranked index Φ-(si), where Φ+(si) is the outranking index for si in the supplier set S and Φ-(si) is the outranked index for si in the supplier set S. The positive outranking flow Φ+(si) expresses how strongly supplier si outranks all the other suppliers, while the negative outranking flow Φ-(si) expresses how strongly si is outranked by all the other suppliers. Thus, in the case of the PROMETHEE I method, one gets a partial ranking based on the positive and negative outranking flows. The results obtained from these formulas are interpreted as follows:
- for the outranking index Φ+(si), the supplier bearing the highest value is in first place, and the suppliers are ranked in decreasing order of the outranking index;
- for the outranked index Φ-(si), the supplier bearing the lowest value is in first place, and the suppliers are ranked in increasing order of the outranked index.
It follows that an ideal action would bear a positive value preferably equal to 1 and a negative value preferably equal to 0 (Wikipedia, 2014).
5. PROMETHEE II method
In the application of the PROMETHEE II method, the selector starts from the results obtained by applying the PROMETHEE I method. The PROMETHEE II method gives a complete ranking by evaluating, for each supplier si of the supplier set S, the net outranking flow Φ(si) = Φ+(si) - Φ-(si). This formula expresses the balance between the positive and negative outranking flows for a particular supplier; the higher the net flow, the better the respective supplier. The ranking obtained by applying the PROMETHEE II method is interpreted as follows:
1. supplier si is preferred over supplier sj when Φ(si) > Φ(sj);
2. supplier si and supplier sj are indifferent when Φ(si) = Φ(sj);
3. supplier sj is preferred over supplier si when Φ(sj) > Φ(si).
6. Basis of Fuzzy Systems
In many cases, decision making and optimization take place in an indefinite environment where a precise mathematical formulation does not exist. In such cases, the expert judgement of the purchasing department and experiments are used to solve the multi-criteria decision making problem. A multi-criteria decision-making problem with a set of acceptable suppliers evaluated on a set of criteria is considered; without loss of generality, assume that all the criteria are to be maximized. A decision-maker (selector) expresses his preference of supplier m over supplier n on criterion g_j through a single-criterion preference function p_j(m, n), which is a function of d_j(m, n) = g_j(m) - g_j(n). The result of this preference function p_j(m, n) ranges from 0 to 1: if supplier m is better than supplier n, then p_j(m, n) > 0; otherwise, p_j(m, n) = 0.
Table 2: Triangular Fuzzy Scale
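As an illustration of the flow computations in Sections 4 and 5, the following Python sketch (not the paper's code) computes the positive, negative and net outranking flows for four suppliers on six criteria. The evaluation scores and weights are hypothetical placeholders, and the linear preference function with thresholds q = 0 and p = 0.6 is an assumption based on the threshold values quoted later in the paper.

import numpy as np

# Hypothetical evaluations of four suppliers (rows) on six criteria K1..K6 (columns),
# all criteria to be maximized; assumed weights summing to 1.
F = np.array([
    [0.7, 0.9, 0.5, 0.9, 0.3, 0.7],
    [0.5, 0.7, 0.7, 0.5, 0.5, 0.5],
    [0.9, 0.3, 0.5, 0.7, 0.7, 0.3],
    [0.3, 0.5, 0.9, 0.3, 0.5, 0.5],
])
w = np.array([0.25, 0.20, 0.20, 0.15, 0.10, 0.10])
q, p = 0.0, 0.6   # indifference and preference thresholds (values quoted in the paper)

def pref(d):
    # linear preference function p(d) for a difference d = g_j(m) - g_j(n)
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

m = len(F)
pi = np.zeros((m, m))  # aggregated preference index pi(si, sj)
for i in range(m):
    for j in range(m):
        if i != j:
            pi[i, j] = sum(w[k] * pref(F[i, k] - F[j, k]) for k in range(len(w)))

phi_plus = pi.sum(axis=1) / (m - 1)    # positive (outranking) flow
phi_minus = pi.sum(axis=0) / (m - 1)   # negative (outranked) flow
phi_net = phi_plus - phi_minus         # PROMETHEE II net flow
print(np.round(phi_plus, 3), np.round(phi_minus, 3), np.round(phi_net, 3))

The supplier with the largest net flow is ranked first under PROMETHEE II; a fuzzy extension would replace the crisp scores in F by triangular fuzzy numbers of the kind listed in the paper's Table 2.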

7. Problem Solving
In our research work, a hypothetical example of a supplier assessment problem is performed. The six supplier selection criteria considered are Capacity (K1), Delivery (K2), Quality (K3), Cost (K4), Warranty policies (K5) and Availability of raw materials (K6). For this application, a weight is assigned to each individual criterion and the calculation is done accordingly. The threshold values q and p are taken as 0 and 0.6, respectively. Four suppliers (S1, S2, S3, S4) are assessed using the determined assessment criteria. Table 3 shows the supplier assessments for each criterion.
Table 3: Supplier assessments for each criterion
Table 4: Negative, positive and net flow values of the suppliers
8. Conclusion
This research paper uses the F-PROMETHEE method for a supplier selection MCDM problem. The main aim is to select the best supplier, i.e., supplier 1 (from the results obtained). The main advantages of this method are its user friendliness, coming from the simple assessments, and its consideration of the indefiniteness or fuzziness involved in the MCDM problem. Therefore, this method can be an efficient and effective tool for decision makers in supply chain management. The proposed research work can also be applied to other selection problems: the method can be used efficiently for car selection, mobile selection, equipment selection, material selection, site selection and, more specifically, in manufacturing industries and other sectors.
References
1. PROMETHEE, Wikipedia.
2. Agarwal, P., Sahai, M., Mishra, V., Bag, M., Singh, V.: A review of multi-criteria decision making techniques for supplier assessment and selection. Int. J. Ind. Eng. Comput. 2 (2011).
3. Albadvi, A., Chaharsooghi, S.K., Esfahanipour, A.: Decision making in stock trading: an application of PROMETHEE. Eur. J. Oper. Res. 177(2) (2007).
4. Bilsel, R.U., Buyukozkan, G., Ruan, D.: A fuzzy preference ranking model for a quality assessment of hospital web sites. Int. J. Intell. Syst. 21(11) (2006).
5. Brans, J.P., Mareschal, B., Vincke, P.: PROMETHEE: A new family of outranking methods in MCDM. In: Brans, J.P. (ed.) Operational Research IFORS 84. North-Holland, Amsterdam (1984).
6. Brans, J.P., Vincke, P.: A preference ranking organization method. Manage. Sci. 31(6) (1985).
7. Brans, J.P., Vincke, P., Mareschal, B.: How to select and how to rank projects: the PROMETHEE method. Eur. J. Oper. Res. 24 (1986).
8. Chen, C.-T., Lin, C.-T., Huang, S.-F.: A fuzzy approach for supplier evaluation and selection in supply chain management. Int. J. Prod. Econ. 102 (2006).
9. Chen, Y.-J.: Structured methodology for supplier selection and evaluation in a supply chain. Inf. Sci. 181 (2011).

Gaussian Noise Removal Non-linear Adaptive Filter using Polynomial Equation
Pritam Bhattacharya, Sayan Halder, Samrat Chakraborty and Amiya Halder
Department of Computer Science & Engineering, St. Thomas College of Engineering & Technology, Kolkata, India
Abstract
Image enhancement directs its attention towards improving the quality of images. Various methods have been proposed in this regard to improve the physical nature of images, including filtering techniques such as average filtering, median filtering and fuzzy filtering. In this paper, we introduce a non-linear, spatial and adaptive filtering technique which removes Gaussian noise from images using an nth-order polynomial equation. Experimental results show that the proposed method gives better results than the different existing methods.
Keywords: Average filtering; median filtering; fuzzy filtering.
1. Introduction
The primary motive of image enhancement is to improve the quality of images. Improving the quality may include various processes such as noise reduction, contrast enhancement and sharpening of the image [1]. Noise may enter an image in various ways. An image gets noisy with time, a phenomenon called ageing of images; on the other hand, an image can get noisy in the initial phase itself if the camera or the sensor array has some defects. This initial phase is termed the image acquisition phase. On a broader picture, the process of image processing involves both the image acquisition phase and the image enhancement phase: while the image acquisition phase centers its attention on acquiring images, the image enhancement phase tries to remove the defects in images, if any [2]. Image enhancement can be performed in two domains, namely the frequency domain and the spatial domain. While the former includes operations on the Fourier transform of the image pixel values, the latter involves operations on the pixel values themselves, thereby making it an easier process. The proposed method involves filtering images in the spatial domain, making it a much easier and simpler method to understand and implement [1, 2]. Filtering of images means removal of distortions and ambiguities in an image: a distorted or noisy image is fed as an input, and a filtered image with improved quality results as an output. Filtering of images is performed taking into consideration the neighboring pixel values. If we do not consider the neighboring pixel values, the filtering process is termed point processing. So, we consider a 3x3 mask which runs through the image and performs the desired operation one region at a time. These operations involve performing some mathematical operations using the pixel values in the region and replacing the original pixel values with the calculated new pixel value, thereby taking a filtered approach towards image enhancement. Various noise removal techniques have been proposed, such as average filtering, median filtering, adaptive filtering and fuzzy filtering [3-13]. The proposed method is a non-linear, spatial and adaptive filtering technique which gives good results comparable to the previously existing techniques. It is noteworthy to mention the word adaptive, as the proposed method can modify itself according to the pixel values in the 3x3 neighborhood. In this paper we propose a method based on the discrete features of an image using a polynomial equation.
2. Proposed method
The proposed method starts with finding the maximum and minimum values in the 3x3 neighborhood. If the difference between the maximum and minimum values obtained is less than the threshold value (200), then filtering is performed in the given region; otherwise the region is left as it is. Next, a floating value is calculated based on the formula given below (Eqn. 1), where x[i] is the ith pixel value in the 3x3 region and n is the user input determining the order of filtering; n can only be a non-negative integer. A difference matrix is then created from the differences obtained by subtracting the calculated value from the original pixel values. The pixel value corresponding to the minimum value in the difference matrix replaces every other pixel in the neighborhood. The value of n plays a big role in determining the result of the filtered image. A formal algorithm for the proposed filtering method is given below:
1. Convert the entire image into a matrix storing the pixel values (X_ij), of order (M x N), i.e. the image resolution.
2. If M and N are not multiples of 3, add dummy rows or columns (whichever is applicable) to make them so.
3. Divide the matrix thus obtained into smaller matrices of order 3x3.
4. Consider a 3x3 matrix (P_k) and operate upon this kth matrix, 1 <= k <= (M x N)/9, as follows: find the greatest (Max_k) and the smallest (Min_k) of the given pixel values. If Max_k - Min_k is less than a threshold value (an optimal value of 200 obtained by trial and error), then find the value (using Eqn. 1) over the pixel values (X_ij) of matrix P_k; else, consider the next matrix (P_k+1). Construct a difference matrix whose (i, j)th entry is the difference between the calculated value and X_ij. Replace the center of the given 3x3 matrix with the pixel value corresponding to the minimum entry of the difference matrix.
3. Experimental Results
The proposed algorithm has been applied to various digital images corrupted with Gaussian noise. The algorithm has been implemented using Dev C++ and MATLAB. Fig. 1 shows a comparison of the output images using average [1], median [4] and fuzzy filtering [3] and the proposed method. It shows that the proposed algorithm produces better images than the above stated filtering techniques. The quantitative performance in terms of peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) for all the above mentioned algorithms is shown in Table 1. It can be seen that the level of Gaussian noise is considerably reduced and the visualization of the images is also improved to a great extent in the reconstructed images. The proposed algorithm gives very encouraging output when compared with the fuzzy filter and other filtering techniques.
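Eqn. 1, the formula for the floating reference value, is not reproduced in this text, so the following Python sketch is only a hedged reconstruction of the algorithm's control flow. It assumes an nth-order power mean of the 3x3 block as a stand-in for Eqn. 1, keeps the threshold of 200, and replaces the block center (per Step 4) with the pixel value closest to the reference value; it also simply skips any leftover rows and columns instead of padding with dummy ones as the paper does.

import numpy as np

def adaptive_polynomial_filter(img, n=2, threshold=200):
    # Hedged reconstruction of the proposed 3x3 adaptive filter for a grayscale image.
    # The reference value below (nth-order power mean) is an assumption standing in for Eqn. 1.
    img = img.astype(np.float64)
    out = img.copy()
    H, W = img.shape
    for r in range(0, H - H % 3, 3):                  # non-overlapping 3x3 blocks (Steps 3-4)
        for c in range(0, W - W % 3, 3):
            block = img[r:r + 3, c:c + 3]
            if block.max() - block.min() >= threshold:
                continue                               # large spread: leave the region as it is
            ref = np.mean(block ** n) ** (1.0 / n)     # assumed stand-in for Eqn. 1
            diff = np.abs(block - ref)                 # difference matrix
            closest = block.flat[np.argmin(diff)]      # pixel value closest to the reference
            out[r + 1, c + 1] = closest                # replace the block center (Step 4)
    return np.clip(out, 0, 255).astype(np.uint8)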

Fig. 1: Comparison of output on the Lena image: a) original image, b) noisy image, c) using the average filter, d) using the median filter, e) using the fuzzy filter, f) using the proposed method.
4. Conclusions
In this paper, a new image enhancement method (removal of Gaussian noise) has been introduced. The method is able to reduce the noise while retaining the sharpness of the images. From the above experimental results, it can be seen that the proposed filter performs better than the average, median and fuzzy filtering techniques.
Table 1: Comparison of the PSNR and RMSE values of the different methods.
References
1. R.C. Gonzalez and R.E. Woods, Digital Image Processing, Pearson Education Inc.
2. W.K. Pratt, Digital Image Processing, Wiley.
3. M. Mozammel Hoque Chowdhury, Md. Ezharul Islam, Nasima Begum and Md. Al-Amin Bhuiyan, Digital Image Enhancement with Fuzzy Rule-Based Filtering, 10th International Conference on Computer & Information Technology, Dhaka, Dec.
4. Kevin Liu, An Implementation of the Median Filter and Its Effectiveness on Different Kinds of Images, Thomas Jefferson High School for Science and Technology Computer Systems Lab, June 13, 2007.
5. Tzu-Cheng Jen and Sheng-Jyh Wang, Image Enhancement based on Quadratic Programming, 15th International Conference on Image Processing, San Diego, CA, Oct.
6. Ehsan Nadernejad, Hamidreza Koohi and Hamid Hassanpour, PDEs-Based Method for Image Enhancement, Applied Mathematical Sciences, Vol. 2, 2008, no. 20.
7. Munther N. Baker, Ali A. Al-Zuky and Fatima S. Abdul-Sattar, Colour Image Noise Reduction Using Fuzzy Filtering, Journal of Engineering & Development, Vol. 12, No. 2, June 2008.
8. A. Polesel, G. Ramponi and V. J. Mathews, Image Enhancement via Adaptive Unsharp Masking, IEEE Trans. Image Processing, vol. 9, Mar.
9. A. Beghdadi and A. L. Negrate, Contrast enhancement technique based on local detection of edges, Comput. Vis. Graph. Image Process., vol. 46.
10. S. Peng and L. Lucke, A Hybrid Filter for Image Enhancement, Proc. of ICIP, Vol. 1, Oct.
11. A. Taguchi, Removal of Mixed Noise by using Fuzzy Rules, Proc. of International Conf. on Knowledge-Based Intelligent Electronic Systems, Apr.
12. Stefan Schulte, Mike Nachtegael, Valerie De Witte, Dietrich Van der Weken and Etienne E. Kerre, A Fuzzy Impulse Noise Detection and Reduction Method, IEEE Trans. on Image Processing, Vol. 15, no. 5, May.
13. Pei-Eng Ng and Kai-Kuang Ma, A Switching Median Filter with Boundary Discriminative Noise Detection for Extremely Corrupted Images, IEEE Trans. on Image Processing, Vol. 15, no. 6, Jun.
14. Jielin Jiang, Lei Zhang and Jian Yang, Mixed Noise Removal by Weighted Encoding with Sparse Nonlocal Regularization, IEEE Transactions on Image Processing, pp. 1-12.
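As a supplement to the comparison in Table 1, the two metrics used there can be computed as in the short Python sketch below; these are the standard PSNR and RMSE definitions for 8-bit grayscale images, not code from the paper.

import numpy as np

def rmse(original, filtered):
    # root mean square error between two equally sized grayscale images
    diff = original.astype(np.float64) - filtered.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, filtered, peak=255.0):
    # peak signal-to-noise ratio in dB, with 255 as the peak value for 8-bit images
    e = rmse(original, filtered)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)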


Over the 10-year span of this strategy, priorities will be identified under each area of focus through successive annual planning cycles. Contents Preface... 3 Purpose... 4 Vision... 5 The Records building the archives of Canadians for Canadians, and for the world... 5 The People engaging all with an interest in archives... 6 The Capacity

More information

Presents 3 RD WEST BENGAL STATE FIDE RATED RAPID CHESS CHAMPIONSHIP 2018

Presents 3 RD WEST BENGAL STATE FIDE RATED RAPID CHESS CHAMPIONSHIP 2018 Presents 3 RD WEST BENGAL STATE FIDE RATED RAPID CHESS CHAMPIONSHIP 2018 Event Code: 184028/WB(S) R/2018 14 th April, 2018 16 th April, 2018 Venue : COLLEGE OF ENGINEERING AND MANAGEMENT, KOLAGHAT Township

More information

SUSTAINABILITY OF RESEARCH CENTRES IN RELATION TO GENERAL AND ACTUAL RISKS

SUSTAINABILITY OF RESEARCH CENTRES IN RELATION TO GENERAL AND ACTUAL RISKS SUSTAINABILITY OF RESEARCH CENTRES IN RELATION TO GENERAL AND ACTUAL RISKS Branislav Hadzima, Associate Professor Stefan Sedivy, PhD., MSc. Lubomír Pepucha, PhD., MSc. Ingrid Zuziaková,MSc. University

More information

A Report 1 st International Conference on Recent Advances in Information Technology (RAIT- 2012)

A Report 1 st International Conference on Recent Advances in Information Technology (RAIT- 2012) A Report 1 st International Conference on Recent Advances in Information Technology (RAIT- 2012) The Department of Computer Science & Engineering of Indian School of Mines, Dhanbad organized the 1 st International

More information

Dr. Pinaki Chakraborty. Dr. A. K. Atta. Dr. Nabakumar Pramanik. Anish Kumar Saha. List of Publications of ( ) Journals

Dr. Pinaki Chakraborty. Dr. A. K. Atta. Dr. Nabakumar Pramanik. Anish Kumar Saha. List of Publications of ( ) Journals List of Publications of (2012-2013) Journals Dr. Pinaki Chakraborty Dr. A. K. Atta 1. Atta, A. K.; Kim, Seul-bee.; Heo, J.; Cho, D-G. Hg(II)-Mediated Intramolecular Cyclization Reaction in Aqueous Media

More information

S3P AGRI-FOOD Updates and next steps. Thematic Partnership TRACEABILITY AND BIG DATA Andalusia

S3P AGRI-FOOD Updates and next steps. Thematic Partnership TRACEABILITY AND BIG DATA Andalusia S3P AGRI-FOOD Updates and next steps Thematic Partnership TRACEABILITY AND BIG DATA Andalusia judit.anda@juntadeandalucia.es internacional.viceconsejeria.capder@juntadeandalucia.es Agro food Digital Innovation

More information

Climate Change Innovation and Technology Framework 2017

Climate Change Innovation and Technology Framework 2017 Climate Change Innovation and Technology Framework 2017 Advancing Alberta s environmental performance and diversification through investments in innovation and technology Table of Contents 2 Message from

More information

Development and Integration of Artificial Intelligence Technologies for Innovation Acceleration

Development and Integration of Artificial Intelligence Technologies for Innovation Acceleration Development and Integration of Artificial Intelligence Technologies for Innovation Acceleration Research Supervisor: Minoru Etoh (Professor, Open and Transdisciplinary Research Initiatives, Osaka University)

More information

TRANSFORMATION INTO A KNOWLEDGE-BASED ECONOMY: THE MALAYSIAN EXPERIENCE

TRANSFORMATION INTO A KNOWLEDGE-BASED ECONOMY: THE MALAYSIAN EXPERIENCE TRANSFORMATION INTO A KNOWLEDGE-BASED ECONOMY: THE MALAYSIAN EXPERIENCE by Honourable Dato Sri Dr. Jamaludin Mohd Jarjis Minister of Science, Technology and Innovation of Malaysia Going Global: The Challenges

More information

Presents 3 RD WEST BENGAL STATE BLITZ CHESS CHAMPIONSHIP 2018

Presents 3 RD WEST BENGAL STATE BLITZ CHESS CHAMPIONSHIP 2018 Presents 3 RD WEST BENGAL STATE BLITZ CHESS CHAMPIONSHIP 2018 16 th April, 2018 Venue : COLLEGE OF ENGINEERING AND MANAGEMENT, KOLAGHAT Township Kolaghat Thermal Power Station, Pin - 721171 Organised by

More information

The Key to the Internet-of-Things: Conquering Complexity One Step at a Time

The Key to the Internet-of-Things: Conquering Complexity One Step at a Time The Key to the Internet-of-Things: Conquering Complexity One Step at a Time at IEEE QRS2017 Prague, CZ June 19, 2017 Adam T. Drobot Wayne, PA 19087 Outline What is IoT? Where is IoT in its evolution? A

More information

-Dr. J. B. Patil, Campus Director, Universal College of Engineering

-Dr. J. B. Patil, Campus Director, Universal College of Engineering FROM CAMPUS DIRECTOR S DESK The evolution of the institute over the past six years has witnessed strong blend of state-of-the-art infrastructure and intricately intertwined human resource committed to

More information

UNIVERSITY OF CALCUTTA FACULTY ACADEMIC PROFILE

UNIVERSITY OF CALCUTTA FACULTY ACADEMIC PROFILE UNIVERSITY OF CALCUTTA FACULTY ACADEMIC PROFILE Full name of the faculty member: KAUSHIK DAS SHARMA Designation: ASSOCIATE PROFESSOR Specialization: Control System Date of Joining the University: December

More information

Ph.D : Technology & Science Fellowship available for meritorious Full-Time, candidates GATE / NET qualified Rs.

Ph.D : Technology & Science Fellowship available for meritorious Full-Time, candidates GATE / NET qualified Rs. QS Ranking NAAC Accredited A Emerging Researchers with creative dynamism for promising innovations are invited to join Hindustan, a blossoming Deemed University with a rich heritage and commitment for

More information

National Conference on Science, Technology and Communication Skills (NCSTCS 2K18), 21st April, 2018, Narula Institute of Technology.

National Conference on Science, Technology and Communication Skills (NCSTCS 2K18), 21st April, 2018, Narula Institute of Technology. Name of faculty: Dr. Debjani Chakraboti Designation: Assistant Professor Contact Details: 9432137013 Qualification: M.Sc, M.Phil, MBA, Ph.D Research Experience: 11 Years Seminar/Conference Attended: National

More information

COS 402 Machine Learning and Artificial Intelligence Fall Lecture 1: Intro

COS 402 Machine Learning and Artificial Intelligence Fall Lecture 1: Intro COS 402 Machine Learning and Artificial Intelligence Fall 2016 Lecture 1: Intro Sanjeev Arora Elad Hazan Today s Agenda Defining intelligence and AI state-of-the-art, goals Course outline AI by introspection

More information

School of Informatics Director of Commercialisation and Industry Engagement

School of Informatics Director of Commercialisation and Industry Engagement School of Informatics Director of Commercialisation and Industry Engagement January 2017 Contents 1. Our Vision 2. The School of Informatics 3. The University of Edinburgh - Mission Statement 4. The Role

More information

WORKSHOP ON BASIC RESEARCH: POLICY RELEVANT DEFINITIONS AND MEASUREMENT ISSUES PAPER. Holmenkollen Park Hotel, Oslo, Norway October 2001

WORKSHOP ON BASIC RESEARCH: POLICY RELEVANT DEFINITIONS AND MEASUREMENT ISSUES PAPER. Holmenkollen Park Hotel, Oslo, Norway October 2001 WORKSHOP ON BASIC RESEARCH: POLICY RELEVANT DEFINITIONS AND MEASUREMENT ISSUES PAPER Holmenkollen Park Hotel, Oslo, Norway 29-30 October 2001 Background 1. In their conclusions to the CSTP (Committee for

More information

The A.I. Revolution Begins With Augmented Intelligence. White Paper January 2018

The A.I. Revolution Begins With Augmented Intelligence. White Paper January 2018 White Paper January 2018 The A.I. Revolution Begins With Augmented Intelligence Steve Davis, Chief Technology Officer Aimee Lessard, Chief Analytics Officer 53% of companies believe that augmented intelligence

More information

Convergence, Grand Challenges, Team Science, and Inclusion

Convergence, Grand Challenges, Team Science, and Inclusion Convergence, Grand Challenges, Team Science, and Inclusion NSF EFRI Workshop Convergence and Interdisciplinarity in Advancing Larger Scale Research May 14, 2018 Pramod P. Khargonekar University of California,

More information

TEAM AAVEGA NATIONAL INSTITUTE OF TECHNOLOGY, PATNA

TEAM AAVEGA NATIONAL INSTITUTE OF TECHNOLOGY, PATNA TEAM AAVEGA NATIONAL INSTITUTE OF TECHNOLOGY, PATNA ABOUT OUR COLLEGE : NATIONAL INSTITUTE OF TECHNOLOGY PATNA is the 18 th National Institute Of Technology created by the M.H.R.D Govt. of India after

More information

33 rd Indian Engineering Congress

33 rd Indian Engineering Congress The Institution of Engineers (India) In service of the Nation since 1920 33 rd Indian Engineering Congress December 21-23, 2018, Udaipur Hosted by: Udaipur Local Centre Venue : Udaipur Theme Integration

More information

April 2015 newsletter. Efficient Energy Planning #3

April 2015 newsletter. Efficient Energy Planning #3 STEEP (Systems Thinking for Efficient Energy Planning) is an innovative European project delivered in a partnership between the three cities of San Sebastian (Spain), Bristol (UK) and Florence (Italy).

More information

Analysis of Footprint in a Crime Scene

Analysis of Footprint in a Crime Scene Abstract Research Journal of Forensic Sciences E-ISSN 2321 1792 Analysis of Footprint in a Crime Scene Samir Kumar Bandyopadhyay, Nabanita Basu and Sayantan Bag, Sayantan Das Department of Computer Science

More information

CHAPTER-5. Suggestions and Conclusion

CHAPTER-5. Suggestions and Conclusion CHAPTER-5 Suggestions and Conclusion 5.1 Introduction In mankind s quest for acquiring, utilizing and propagating knowledge, eresources has been the lifeblood of scholarly communication. In the emerging

More information

On line admission. Personal Part : Upper Case automatically. General/ST/SC/OBC-A/OBC-B. 8. Physically

On line admission. Personal Part : Upper Case automatically. General/ST/SC/OBC-A/OBC-B. 8. Physically On line admission Personal Part 1. Name 2. Date of Birth 3. Nationality 4. Religion 5. Minority 6. Gender 7. Caste 8. Physically Handicapped(PWD) 9. Contact No. 10. Email ID if any 11. Father s Name 12.

More information

FUZZY EXPERT SYSTEM FOR DIABETES USING REINFORCED FUZZY ASSESSMENT MECHANISMS M.KALPANA

FUZZY EXPERT SYSTEM FOR DIABETES USING REINFORCED FUZZY ASSESSMENT MECHANISMS M.KALPANA FUZZY EXPERT SYSTEM FOR DIABETES USING REINFORCED FUZZY ASSESSMENT MECHANISMS Thesis Submitted to the BHARATHIAR UNIVERSITY in partial fulfillment of the requirements for the award of the Degree of DOCTOR

More information

*Author for Correspondence. Keywords: Technology, Technology capability, Technology assessment, Technology Needs Assessment (TNA) model

*Author for Correspondence. Keywords: Technology, Technology capability, Technology assessment, Technology Needs Assessment (TNA) model MEASUREMENT AND ANALYSIS OF TECHNOLOGICAL CAPABILITIES IN THE DRILLING INDUSTRY USING TECHNOLOGY NEEDS ASSESSMENT MODEL (CASE STUDY: NATIONAL IRANIAN DRILLING COMPANY) * Abdolaziz Saedi Nia 1 1 PhD Student

More information

in the New Zealand Curriculum

in the New Zealand Curriculum Technology in the New Zealand Curriculum We ve revised the Technology learning area to strengthen the positioning of digital technologies in the New Zealand Curriculum. The goal of this change is to ensure

More information

Recommender Systems TIETS43 Collaborative Filtering

Recommender Systems TIETS43 Collaborative Filtering + Recommender Systems TIETS43 Collaborative Filtering Fall 2017 Kostas Stefanidis kostas.stefanidis@uta.fi https://coursepages.uta.fi/tiets43/ selection Amazon generates 35% of their sales through recommendations

More information

CONSENT IN THE TIME OF BIG DATA. Richard Austin February 1, 2017

CONSENT IN THE TIME OF BIG DATA. Richard Austin February 1, 2017 CONSENT IN THE TIME OF BIG DATA Richard Austin February 1, 2017 1 Agenda 1. Introduction 2. The Big Data Lifecycle 3. Privacy Protection The Existing Landscape 4. The Appropriate Response? 22 1. Introduction

More information

Assessment of Smart Machines and Manufacturing Competence Centre (SMACC) Scientific Advisory Board Site Visit April 2018.

Assessment of Smart Machines and Manufacturing Competence Centre (SMACC) Scientific Advisory Board Site Visit April 2018. Assessment of Smart Machines and Manufacturing Competence Centre (SMACC) Scientific Advisory Board Site Visit 25-27 April 2018 Assessment Report 1. Scientific ambition, quality and impact Rating: 3.5 The

More information

Framework Programme 7

Framework Programme 7 Framework Programme 7 1 Joining the EU programmes as a Belarusian 1. Introduction to the Framework Programme 7 2. Focus on evaluation issues + exercise 3. Strategies for Belarusian organisations + exercise

More information

NATIONAL CONSUMER DISPUTES REDRESSAL COMMISSION NEW DELHI NCDRC CIRCUIT BENCH AT KOLKATA, WEST BENGAL

NATIONAL CONSUMER DISPUTES REDRESSAL COMMISSION NEW DELHI NCDRC CIRCUIT BENCH AT KOLKATA, WEST BENGAL NATIONAL CONSUMER DISPUTES REDRESSAL COMMISSION NEW DELHI NCDRC CIRCUIT BENCH AT KOLKATA, WEST BENGAL LIST OF BUSINESS FOR FRIDAY THE 07 th DECEMBER, 2018 AT 10:30 A.M. BEFORE: HON'BLE MR. JUSTICE R.K.

More information

WHEREAS, UCMERI requires additional financial support to sustain its operations; and

WHEREAS, UCMERI requires additional financial support to sustain its operations; and PARTICIPATION AGREEMENT between THE REGENTS OF THE UNIVERSITY OF CALIFORNIA acting through THE MERCED CAMPUS OF THE UNIVERSITY OF CALIFORNIA on behalf of THE UC MERCED ENERGY RESEARCH INSTITUTE (UCMERI)

More information

Vienna Declaration: The most needed social innovations and related research topics

Vienna Declaration: The most needed social innovations and related research topics Vienna Declaration: The most needed social innovations and related research topics 1. Rationale of the Declaration In response to major societal challenges the Europe 2020 strategy sets measurable targets

More information

Towards a Consumer-Driven Energy System

Towards a Consumer-Driven Energy System IEA Committee on Energy Research and Technology EXPERTS GROUP ON R&D PRIORITY-SETTING AND EVALUATION Towards a Consumer-Driven Energy System Understanding Human Behaviour Workshop Summary 12-13 October

More information

University of Strathclyde. Gender Pay and Equal Pay Report. April 2017

University of Strathclyde. Gender Pay and Equal Pay Report. April 2017 University of Strathclyde Gender Pay and Equal Pay Report April 2017 EXECUTIVE SUMMARY The University of Strathclyde is committed to the principle of equal pay for equal work for all of its staff. We have

More information

Insight into the Community Science and its Interaction with Information Science and Technology: A Socio-Techno Perspective

Insight into the Community Science and its Interaction with Information Science and Technology: A Socio-Techno Perspective International Journal of Information Science and Computing 3(2): December, 2016: p. 78-79 DOI : 10.5958/2454-9533.2016.00009.0 Insight into the Community Science and its Interaction with Information Science

More information

DR. M. N. CHATTERJEE DAY MOC : DR. NIRMALYA MANNA, DR. SANJIB BANDYOPADHYAY & DR. SUJOY DASGUPTA

DR. M. N. CHATTERJEE DAY MOC : DR. NIRMALYA MANNA, DR. SANJIB BANDYOPADHYAY & DR. SUJOY DASGUPTA 30.01.2017 - DR. M. N. CHATTERJEE DAY MOC : DR. NIRMALYA MANNA, DR. SANJIB BANDYOPADHYAY & DR. SUJOY DASGUPTA GENERAL SURGERY CHAIRPERSON : DR. DEBASHIS BHATTACHARYA TIME: 10:00 10:15 SPEAKER : DR. SANDEEP

More information

Horizon 2020 Towards a Common Strategic Framework for EU Research and Innovation Funding

Horizon 2020 Towards a Common Strategic Framework for EU Research and Innovation Funding Horizon 2020 Towards a Common Strategic Framework for EU Research and Innovation Funding Rudolf Strohmeier DG Research & Innovation The context: Europe 2020 strategy Objectives of smart, sustainable and

More information

Classification Experiments for Number Plate Recognition Data Set Using Weka

Classification Experiments for Number Plate Recognition Data Set Using Weka Classification Experiments for Number Plate Recognition Data Set Using Weka Atul Kumar 1, Sunila Godara 2 1 Department of Computer Science and Engineering Guru Jambheshwar University of Science and Technology

More information

THE METHODOLOGY: STATUS AND OBJECTIVES THE PILOT PROJECT B

THE METHODOLOGY: STATUS AND OBJECTIVES THE PILOT PROJECT B Contents The methodology: status and objectives 3 The pilot project B 3 Definition of the overall matrix 4 The starting phases: setting up the framework for the pilot project 4 1) Constitution of the local

More information

EVCA Strategic Priorities

EVCA Strategic Priorities EVCA Strategic Priorities EVCA Strategic Priorities The following document identifies the strategic priorities for the European Private Equity and Venture Capital Association (EVCA) over the next three

More information

The function is assumed by technology management, usually the Technological Development Committee.

The function is assumed by technology management, usually the Technological Development Committee. Integrated Report 6.8 Innovation 167 The ACS Group is a continuously evolving organisation that responds to the growing demand for improvements in processes, technological advances and quality of service

More information

GamECAR JULY ULY Meetings. 5 Toward the future. 5 Consortium. E Stay updated

GamECAR JULY ULY Meetings. 5 Toward the future. 5 Consortium. E Stay updated NEWSLETTER 1 ULY 2017 JULY The project engine has started and there is a long way to go, but we aim at consuming as less gas as possible! It will be a game, but a serious one. Playing it for real, while

More information

EuropeAid. Sustainable and Cleaner Production in the Manufacturing Industries of Pakistan (SCI-Pak)

EuropeAid. Sustainable and Cleaner Production in the Manufacturing Industries of Pakistan (SCI-Pak) Sustainable and Cleaner Production in the Manufacturing Industries of Pakistan (SCI-Pak) Switch Asia 2008 Target Country Pakistan Implementation period 1.03.2008-29.02.2012 EC co-financing 1126873 Lead

More information

India Robotics Roadmap

India Robotics Roadmap India Robotics Roadmap Industry Connections Activity Initiation Document (ICAID) Version: 0.2, 8 February 2017 IC17-003-01 Approved by the IEEE-SASB 4 May 2017 Instructions Instructions on how to fill

More information

ECU Research Commercialisation

ECU Research Commercialisation The Framework This framework describes the principles, elements and organisational characteristics that define the commercialisation function and its place and priority within ECU. Firstly, care has been

More information

Workshop on Enabling Technologies in CSF for EU Research and Innovation Funding

Workshop on Enabling Technologies in CSF for EU Research and Innovation Funding Workshop on Enabling Technologies in CSF for EU Research and Innovation Funding Rapporteur Professor Costas Kiparissides, Department of Chemical Engineering, Aristotle University of Thessaloniki Brussels,

More information