
## Saturday, June 27, 2015

### Artificial Intelligence - Mumbai University Syllabus and Related Knols

_______________________________________________________

## Free Course on Artificial Intelligence

Stanford University Engineering free online course, 10 October 2011 to 18 December 2011:

http://www.ai-class.com/

The course has since moved to Udacity; search the Udacity website for courses on artificial intelligence.

Course: Introduction to Machine Learning - Free Course

https://www.udacity.com/course/intro-to-machine-learning--ud120


________________________________________________________

## Artificial Intelligence - Mumbai University Syllabus

Objective: This course will introduce the basic ideas and techniques underlying the design of intelligent computer systems. Students will develop a basic understanding of the building blocks of AI as presented in terms of intelligent agents. This course will attempt to help students understand the main approaches to artificial intelligence, such as heuristic search, game search, logical inference, decision theory, planning, machine learning, neural networks and natural language processing. Students will be able to recognize problems that may be solved using artificial intelligence and implement artificial intelligence algorithms for hands-on experience.

1. Artificial Intelligence: Introduction to AI, history of AI, emergence of intelligent agents.

2. Intelligent Agents: PEAS representation for an agent, agent environments, concept of a rational agent, structure of intelligent agents, types of agents.

3. Problem Solving: Solving problems by searching, problem formulation; uninformed search techniques – DFS, BFS, iterative deepening, comparing different techniques; informed search methods – heuristic functions, hill climbing, simulated annealing, A*, performance evaluation.

4. Constraint Satisfaction Problems: Constraint satisfaction problems such as map coloring and cryptarithmetic; backtracking for CSPs, local search.

5. Adversarial Search: Games, minimax algorithm, alpha-beta pruning.

6. Knowledge and Reasoning: A knowledge-based agent, introduction to logic, propositional logic, reasoning in propositional logic; first-order logic: syntax and semantics, extensions and notational variation, inference in first-order logic, unification, forward and backward chaining, resolution.

7. Knowledge Engineering: Ontology, categories and objects, mental events and objects.

8. Planning: The planning problem, planning with state-space search, partial-order planning, hierarchical planning, conditional planning.

9. Uncertain Knowledge and Reasoning: Uncertainty, representing knowledge in an uncertain domain, overview of probability concepts, belief networks, simple inference in belief networks.

10. Learning: Learning from observations, general model of learning agents, inductive learning, learning decision trees, introduction to neural networks, perceptrons, multilayer feed-forward networks, applications of ANNs; reinforcement learning: passive and active reinforcement learning.

11. Agent Communication: Communication as action, types of communicating agents, a formal grammar for a subset of English.
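To make unit 5 (adversarial search) concrete, here is a minimal sketch of minimax with alpha-beta pruning. The syllabus experiments suggest Java; Python is used here only for brevity, and the function name and example tree are illustrative, not part of the syllabus.

```python
# Minimax with alpha-beta pruning on an explicit game tree.
# A tree is a nested list; leaves are numeric utilities for the MAX player.

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Return the minimax value of `node`, pruning branches that
    cannot influence the final decision."""
    if not isinstance(node, list):          # leaf: utility value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cut-off: MIN will avoid this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:               # alpha cut-off: MAX will avoid this branch
                break
        return value

# Classic two-ply example tree: three MIN nodes with three leaves each.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))   # -> 3
```

Pruning never changes the minimax value; it only skips subtrees that cannot affect the root decision.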
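Unit 10 covers perceptrons. The classic perceptron learning rule can be sketched in a few lines; the training target (the AND function) and all parameter values below are an illustrative choice, not from the syllabus.

```python
# Perceptron learning rule on the linearly separable AND function.

def train_perceptron(samples, epochs=10, lr=0.1):
    """Return (weights, bias) fitted with the perceptron update
    w <- w + lr * (target - output) * x."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - output           # 0 when the sample is classified correctly
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predictions = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in AND]
print(predictions)   # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule finds a separating line in finitely many updates; XOR, by contrast, would never converge.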

Text Book:

1. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd Edition, Pearson.

Reference Books:

1. George Luger, "AI – Structures and Strategies for Complex Problem Solving", 4/e, 2002, Pearson Education.

2. Robert J. Schalkoff, Artificial Intelligence: An Engineering Approach, McGraw Hill, 1990.

3. Patrick H. Winston, Artificial Intelligence, 3rd Edition, Pearson.

4. Nils J. Nilsson, Principles of Artificial Intelligence, Narosa Publications.

5. Dan W. Patterson, Introduction to Artificial Intelligence and Expert Systems, PHI.

6. Efraim Turban and Jay E. Aronson, "Decision Support Systems and Intelligent Systems", PHI.

7. M. Tim Jones, Artificial Intelligence – A Systems Approach, Infinity Science Press / Firewall Media.

8. Christopher Thornton and Benedict du Boulay, "Artificial Intelligence – Strategies, Applications, and Models through Search", 2nd Edition, New Age International Publications.

9. Elaine Rich and Kevin Knight, Artificial Intelligence, Tata McGraw Hill, 1999.

10. David W. Rolston, Principles of Artificial Intelligence and Expert Systems Development, McGraw Hill, 1988.

Term Work:

Term work shall consist of at least 10 experiments covering all topics and one written test.

Distribution of marks for term work shall be as follows:

1. Laboratory work (experiments and journal): 15 marks
2. Test (at least one): 10 marks

The final certification and acceptance of term work requires satisfactory performance of laboratory work and a minimum passing grade in the term work.

Suggested Experiment List (can be implemented in Java):

1. Problem formulation problems
2. Programs for search
3. Constraint satisfaction programs
4. Game-playing programs
5. Assignments on resolution
6. Building a knowledge base and implementing inference
7. Assignment on planning and reinforcement learning
8. Implementing a decision tree learner
9. Neural network implementation
10. Bayes' belief network (can use the Microsoft BBN tool)
11. Assignment on agent communication – grammar representation for simple domains
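As a starting point for experiment 3 (constraint satisfaction), here is a minimal backtracking map-coloring sketch. The list suggests Java; Python is used here for brevity, and the four-region map below is a made-up example, not part of the syllabus.

```python
# Backtracking search for map coloring: assign colors so that no two
# adjacent regions share a color.

def backtrack(assignment, regions, neighbors, colors):
    if len(assignment) == len(regions):
        return assignment                     # every region is colored
    region = next(r for r in regions if r not in assignment)
    for color in colors:
        # consistency check: no already-colored neighbor may share this color
        if all(assignment.get(n) != color for n in neighbors[region]):
            assignment[region] = color
            result = backtrack(assignment, regions, neighbors, colors)
            if result is not None:
                return result
            del assignment[region]            # undo and try the next color
    return None                               # dead end: backtrack

regions = ["A", "B", "C", "D"]
neighbors = {"A": ["B", "C"], "B": ["A", "C", "D"],
             "C": ["A", "B", "D"], "D": ["B", "C"]}
solution = backtrack({}, regions, neighbors, ["red", "green", "blue"])
print(solution)   # -> {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```

The same skeleton extends to cryptarithmetic by replacing regions with letters and the consistency check with the arithmetic constraint.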

Additional Books - Collection by Me (NRao)

Fundamentals of the New Artificial Intelligence: Neural, Evolutionary, Fuzzy and More

Toshinori Munakata

Springer Science & Business Media, Jan 1, 2008 - 272 pages

This significantly updated 2nd edition thoroughly covers the most essential and widely employed material pertaining to neural networks, genetic algorithms, fuzzy systems, rough sets, and chaos. The exposition reveals the core principles, concepts, and technologies in a concise and accessible, easy-to-understand manner, and as a result, prerequisites are minimal. Topics and features: retains the well-received features of the first edition, yet clarifies and expands on the topic; features completely new material on simulated annealing, Boltzmann machines, and extended fuzzy if-then rules tables.

https://books.google.co.in/books?id=lei-Zt8UGSQC

## Video Lectures by IIT Faculty

_______________

http://www.youtube.com/watch?v=eLbMPyrw4rw&feature=edu&list=PL6EE0CD02910E57B8

Lecture list and individual lecture links

http://www.youtube.com/course?list=PL6EE0CD02910E57B8&category=University/Science/Computer%20Science

_______________

Knols - Articles on Artificial Intelligence

Original knol - http://knol.google.com/k/narayana-rao/artificial-intelligence-mumbai/2utb2lsm2k7a/5734

Updated 27 June 2015

First published on 19 March 2012

### Rough Sets - Theory and Applications - Collection of Articles and Books

2014

ISRN Applied Mathematics, Volume 2014 (2014), Article ID 382738, 11 pages

http://dx.doi.org/10.1155/2014/382738

## A Hybrid Feature Selection Method Based on Rough Conditional Mutual Information and Naive Bayesian Classifier

Zilin Zeng (1, 2), Hongjun Zhang (1), Rui Zhang (1), and Youliang Zhang (1)

1. PLA University of Science & Technology, Nanjing 210007, China
2. Nanchang Military Academy, Nanchang 330103, China

http://www.hindawi.com/journals/isrn/2014/382738/

Open Access Article

A Rough Hypercuboid Approach for Feature Selection in Approximation Spaces

IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 1 (Jan. 2014), pp. 16-29

Pradipta Maji, Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India

DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TKDE.2012.242

ABSTRACT

The selection of relevant and significant features is an important problem particularly for data sets with large number of features. In this regard, a new feature selection algorithm is presented based on a rough hypercuboid approach. It selects a set of features from a data set by maximizing the relevance, dependency, and significance of the selected features. By introducing the concept of the hypercuboid equivalence partition matrix, a novel representation of degree of dependency of sample categories on features is proposed to measure the relevance, dependency, and significance of features in approximation spaces. The equivalence partition matrix also offers an efficient way to calculate many more quantitative measures to describe the inexactness of approximate classification. Several quantitative indices are introduced based on the rough hypercuboid approach for evaluating the performance of the proposed method. The superiority of the proposed method over other feature selection methods, in terms of computational complexity and classification accuracy, is established extensively on various real-life data sets of different sizes and dimensions.

INDEX TERMS

Approximation methods, rough sets, data analysis, uncertainty, data mining, redundancy, rough hypercuboid approach, pattern recognition, feature selection

CITATION

Pradipta Maji, "A Rough Hypercuboid Approach for Feature Selection in Approximation Spaces", IEEE Transactions on Knowledge & Data Engineering, vol.26, no. 1, pp. 16-29, Jan. 2014, doi:10.1109/TKDE.2012.242

2013

Economic Modeling Using Artificial Intelligence Methods


Tshilidzi Marwala

Springer Science & Business Media, Apr 2, 2013 - 261 pages


Economic Modeling Using Artificial Intelligence Methods examines the application of artificial intelligence methods to model economic data. Traditionally, economic data has been modeled in the linear domain, where the principles of superposition are valid. The application of artificial intelligence to economic modeling allows for flexible multi-order non-linear modeling. In addition, game theory has largely been applied in economic modeling. However, the inherent limitation of game theory when dealing with many-player games encourages the use of multi-agent systems for modeling economic phenomena.

The artificial intelligence techniques used to model economic data include:

multi-layer perceptron neural networks

radial basis functions

support vector machines

rough sets

genetic algorithm

particle swarm optimization

simulated annealing

multi-agent system

incremental learning

fuzzy networks

Signal processing techniques are explored to analyze economic data, and these techniques are the time domain methods, time-frequency domain methods and fractal dimension approaches. Interesting economic problems such as causality versus correlation, simulating the stock market, modeling and controlling inflation, option pricing, modeling economic growth as well as portfolio optimization are examined. The relationship between economic dependency and interstate conflict is explored, and knowledge on how economics is useful to foster peace – and vice versa – is investigated. Economic Modeling Using Artificial Intelligence Methods deals with the issue of causality in the non-linear domain and applies the automatic relevance determination, the evidence framework, Bayesian approach and Granger causality to understand causality and correlation.

Economic Modeling Using Artificial Intelligence Methods makes an important contribution to the area of econometrics, and is a valuable source of reference for graduate students, researchers and financial practitioners.

https://books.google.co.in/books?id=hV9EAAAAQBAJ

2011

Advanced Artificial Intelligence

Zhongzhi Shi

World Scientific, 2011 - 613 pages

Artificial intelligence is a branch of computer science and a discipline in the study of machine intelligence: developing intelligent machines or intelligent systems that imitate, extend and augment human intelligence through artificial means and techniques in order to realize intelligent behavior.

Advanced Artificial Intelligence consists of 16 chapters. The content of the book is novel, reflects the research updates in this field, and especially summarizes the author's scientific efforts over many years. The book discusses the methods and key technology from theory, algorithm, system and applications related to artificial intelligence. This book can be regarded as a textbook for senior students or graduate students in the information field and related tertiary specialities. It is also suitable as a reference book for relevant scientific and technical personnel.

https://books.google.co.in/books?id=wNbMOoTuGU0C

2003

Rough Sets: Current and Future Developments

http://onlinelibrary.wiley.com/doi/10.1111/1468-0394.00248/pdf

## Tuesday, June 9, 2015

### Syllabus - First Year Computer Science and Engineering - SNDT University, Mumbai

#### Detailed syllabus and recommended books. Links to appropriate online articles and videos will be given. This blog will also contain a number of articles and videos.

#### _________________________________________________________

#### Computer Science and Engineering - Knol Books - Catalogue

**Sub-Directories of Articles/Knols in the Area of Computer Science and Engineering**
Knol Sub-Directory - Computer Science, Engineering and Technology - Subjects
Knol Sub-Directory - New Knols - Computer Science, Engineering and Technology

#### _______________________________________________________

First Year

Engineering Mathematics - I/II

Applied Science - I/II

Engineering Drawing

Electronic Devices

Introduction to Mechanics and Thermodynamics

Communication Skills - I

Basic Electrical Engineering

Programming in C

## Syllabus - May 2011

## Applied Mathematics - I

Semester: I Lect: 4Hr

Branch: ENC / CST / IT Credit: 04

1. MATRICES

Types of matrices. Adjoint of a matrix, inverse of a matrix. Elementary transformations. Rank of a matrix. Reduction to normal form. Partitioning of matrices. Systems of homogeneous and non-homogeneous equations, their consistency and solutions. Linear dependence and independence of rows and columns of a matrix over a real field. Eigenvalues and eigenvectors. Cayley–Hamilton theorem. Minimal polynomial – derogatory and non-derogatory matrices. Applications in engineering.

2. DIFFERENTIAL EQUATION

Differential equations of 1st order and 1st degree, linear equations, Bernoulli's equations, exact differential equations – integrating factors. Differential equations of higher order. Differential operator D, where f(D)y = X with X = e^(ax), sin(ax+b), cos(ax+b), x^m, e^(ax)f(x). Linear differential equations with constant and variable coefficients (Cauchy linear equations and Legendre's linear equations). Simple applications (where the differential equation is given). Applications in engineering.

3. DIFFERENTIAL CALCULUS

Successive differentiation, Leibnitz's theorem (without proof) and applications, Rolle's theorem, Lagrange's and Cauchy's mean value theorems. Applications in engineering.

4. COMPLEX NUMBERS

Definition of complex numbers; Cartesian, polar and exponential forms; De Moivre's theorem and roots of complex numbers. Hyperbolic functions; separation of real and imaginary parts of circular and hyperbolic functions. Logarithm of complex numbers. Applications in engineering.
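The Cayley–Hamilton theorem from the matrices unit states that every square matrix satisfies its own characteristic equation. A quick numerical check for a 2x2 case (the matrix below is an arbitrary illustrative example) can be written in a few lines of Python:

```python
# Cayley–Hamilton check for a 2x2 matrix A:
# A must satisfy  A^2 - tr(A)*A + det(A)*I = 0.

def matmul(X, Y):
    """2x2 matrix product, computed entry by entry."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0],
     [1.0, 3.0]]
trace = A[0][0] + A[1][1]                       # tr(A)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # det(A)
A2 = matmul(A, A)

# residual = A^2 - tr(A)*A + det(A)*I, which should be the zero matrix
I = [[1.0, 0.0], [0.0, 1.0]]
residual = [[A2[i][j] - trace * A[i][j] + det * I[i][j] for j in range(2)]
            for i in range(2)]
print(residual)   # -> [[0.0, 0.0], [0.0, 0.0]]
```

The same identity gives A⁻¹ = (tr(A)·I − A)/det(A) for invertible 2x2 matrices, which is a useful exam shortcut.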

Reference Books:

1. P. N. Wartikar and J. N. Wartikar, Elements of Applied Mathematics, 1st Edition, Pune Vidyarthi Griha Prakashan, 1995. (Rs. 110/-)

2. B. S. Grewal, Higher Engineering Mathematics, 34th Edition, Khanna Publishers, 1998. (Rs. 170/-)

3. Shanti Narayan, Matrices, 9th Edition, S. Chand, 1997. (Rs. 45/-)

4. Shanti Narayan, Differential Calculus, 14th Edition, S. Chand, 1996. (Rs. 60/-)

5. A. R. Vashishtha, Matrices, 27th Edition, Krishna Prakashan Media (P) Ltd., 1996. (Rs. 75/-)

6. Erwin Kreyszig, Advanced Engineering Mathematics, 5th Edition, New Age International (P) Ltd., 1997. (Rs. 295/-)

## Applied Mathematics - II

Semester: II

Lect: 4 Hr

Branch:ENC/CST/IT Credit: 04

SECTION A

Partial Differentiation: Definition, differentiation of composite and implicit functions, Euler’s theorem on Homogeneous functions, total differentiation of composite functions using partial differentiation, errors and approximation, extreme values of functions of two variables, applications in engineering.

Vector Algebra and Vector Calculus: Products of three or more vectors; vector differentiation – rules and theorems on vector differentiation; scalar point functions and vector point functions; gradient, divergence and curl and their applications; solenoidal and irrotational fields; scalar potential of irrotational vectors; applications in engineering.

Differentiation Under Integral Sign: Theorems on differentiation under integral sign (without proof), Applications in engineering.

SECTION B

Integral Calculus: Curve tracing (only standard curves), rectification (only arc length); double integrals – change of order of integration, double integration in polar coordinates; applications of single and double integration – mass and volume; triple integration; applications in engineering.

Error Functions – Beta and Gamma Functions: The error function and its properties, simple problems based on it; beta and gamma functions, their properties, the relation between beta and gamma functions, the duplication formula and problems based on it; applications in engineering.
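The beta–gamma relation in this unit, B(m, n) = Γ(m)Γ(n)/Γ(m+n), can be verified numerically using only the standard library; the integration step count below is an arbitrary illustrative choice.

```python
# Numerical check of the beta–gamma relation B(m, n) = Γ(m)Γ(n) / Γ(m + n).

import math

def beta_numeric(m, n, steps=100000):
    """Approximate B(m, n) = ∫₀¹ t^(m-1) (1-t)^(n-1) dt by the midpoint rule."""
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (m - 1) * (1 - (i + 0.5) * h) ** (n - 1)
               for i in range(steps)) * h

m, n = 3, 4
lhs = beta_numeric(m, n)                              # direct integral
rhs = math.gamma(m) * math.gamma(n) / math.gamma(m + n)   # gamma-function side
print(round(lhs, 6), round(rhs, 6))   # both ≈ 1/60 ≈ 0.016667
```

For integer arguments the right-hand side reduces to factorials: B(3, 4) = 2!·3!/6! = 12/720 = 1/60.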

References:

1. P. N. Wartikar and J. N. Wartikar, Elements of Applied Mathematics, 1st Edition, Pune Vidyarthi Griha Prakashan, 1995. (Rs. 110/-)

2. B. S. Grewal, Higher Engineering Mathematics, 34th edition, Khanna Publishers, 1998. (Rs. 170).

3. Shanti Narayan, Differential Calculus, 14th Edition, S. Chand, 1996. (Rs. 60/-)

4. Murray Spiegel, Vector Analysis.

5. Erwin Kreyszig, Advanced Engineering Mathematics, 5th Edition, New Age International (P) Ltd., 1997. (Rs. 295/-)

## Applied Science – I

Semester: II Lect: 4 Hr

Branch: ENC/CST/IT Credit: 04

Section – I (Physics)

Physics of Semiconductors

Introduction to band theory; metals, semiconductors and insulators; charge carriers in semiconductors; conductivity and mobility of charge carriers; concept of the Fermi level; Fermi level in intrinsic and extrinsic semiconductors; semiconductor junction diodes.

Introduction to Fiber Optic Communication

i] Propagation of light in an optical fiber; TIR, Angle of Acceptance; Numerical Aperture; Index Difference; Types of Fibers i) Step Index Fiber ii) Graded Index Fiber; Advantages of Optical Fiber, Applications of Optical Fiber Communication System.

ii] Optical Sources

Introduction to Lasers; Terms Associated with Lasers; Theory of Ruby Lasers; He-Ne Laser, LED, Semiconductor Lasers.

iii] Photo Detectors

Minority charge carrier injection; photodiodes – p-n, p-i-n, avalanche.

Ultrasonic

Characteristics of ultrasonic waves, magnetostrictive effect, magnetostrictive transducer, piezoelectric effect, piezo quartz crystal and transducer. Applications of ultrasonic waves – i) high-power applications such as ultrasonic cleaners and cavitation; ii) low-power applications such as non-destructive testing methods – flaw detectors, ultrasonic thickness gauges, sonar, etc.

Super Conductors

Properties characterizing superconductors; implications of zero resistivity; critical temperature Tc, critical magnetic field Hc, critical current Ic; Meissner effect; penetration depth; types of superconductors; London equations; BCS theory; Josephson effect and junctions; SQUIDs; applications of superconductors.

Introduction to Electromagnetics

Laws of physics such as Gauss's law, Ampere's circuital law, solenoidal vector B, and the Faraday–Lenz law expressed in terms of Maxwell's equations; modified form of Ampere's law.
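For reference, the four laws named above are exactly Maxwell's equations in differential form (SI units, with charge density ρ and current density J):

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \rho / \varepsilon_0 && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0 && \text{(solenoidal } \mathbf{B}\text{)} \\
\nabla \times \mathbf{E} &= -\,\partial \mathbf{B} / \partial t && \text{(Faraday--Lenz law)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t && \text{(modified Amp\`ere's law)}
\end{aligned}
```

The μ₀ε₀ ∂E/∂t term is the displacement-current correction that distinguishes the modified Ampere's law from the original circuital law.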

References:

1. R. K. Gaur and S. L. Gupta, Engineering Physics, 7th Edition, Dhanpat Rai Publications Pvt. Ltd., 1997. (Rs. 165/-)

2. B. L. Theraja, Modern Physics, S. Chand and Company Ltd., 1996. (Rs. 60/-)

3. S. G. Patgawkar, Applied Physics – I, 5th Edition, Technova Publication, 1999. (Rs. 75/-)

4. Arthur Beiser, Perspectives of Modern Physics, McGraw Hill, 1997. (Rs. 400/-)

5. Charles Kittel, Solid State Physics, 7th Edition, John Wiley & Sons, 1996. (Rs. 254/-)

6. J. Wilson and J. F. B. Hawkes, Optoelectronics – An Introduction, 2nd Edition, PHI, 1999. (Rs. 175/-)

Section – II (Chemistry)

7. Phase Rule

Phase Rule, Water System, Sulphur System, Phase Rule for two Component Alloy Systems, Eutectic System, Bismuth – Cadmium Eutectic System, Lead – Silver System – Simple Eutectic Formation.

8. Electrochemistry, Specific, Equivalent and Molar Conductance

Introduction, Kohlrausch’s Law of Independent Migration of Ions, Laws of Electrolysis, Transport Number, Conductometric Titration.

9. Spectroscopy

Electromagnetic radiation; spectroscopy – principle, instrumentation and applications of microwave, IR and UV-visible spectroscopy; Beer–Lambert law.

10. Atomic Structure & Atomic Properties

Rutherford's model, Bohr's model, Aufbau principle, Pauli's exclusion principle, Hund's rule, electronic configuration; atomic properties such as ionization potential, electronegativity, electron affinity, atomic size, oxidation potential.
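The Beer–Lambert law from the spectroscopy unit, A = ε·l·c with A = −log₁₀(I/I₀), is a one-line calculation; the molar absorptivity and concentration below are illustrative values, not from the syllabus.

```python
# Beer–Lambert law: absorbance A = epsilon * l * c, transmittance T = 10^(-A).

epsilon = 1.2e4   # molar absorptivity, L mol^-1 cm^-1 (illustrative value)
l = 1.0           # path length, cm
c = 5.0e-5        # concentration, mol L^-1

A = epsilon * l * c     # absorbance (dimensionless)
T = 10 ** (-A)          # fraction of light transmitted
print(round(A, 3), round(T, 4))   # -> 0.6 0.2512
```

Inverting the same relation (c = A/(ε·l)) is how concentrations are read off a UV-visible spectrophotometer calibration.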

References:

1. Glasstone Lewis, Physical Chemistry

2. C. N. Banwell, Fundamentals of Molecular Spectroscopy, 3rd Edition, Tata McGraw Hill, 1992. (Rs. 69/-)

3. Anand and Chatwal, Instrumental Methods of Analysis, Himalaya Publishing House, 1997. (Rs. 160/-)

## Communication Skills

Semester: I Lect: 4 Hr

Branch: ENC/ CST/ IT Credit: 04

SECTION A

1. Communication

The process, channels and media; oral and written communication; verbal and non-verbal communication; body language; barriers to communication; developing communication through techniques.

2. Writing Skills

Vocabulary building – use of synonyms, antonyms, homonyms, homophones, word formation, confused sets of words.

Writing effective paragraphs – through illustration, example, argument, analysis, description and comparison; expansion of key sentences.

Business correspondence – principles of correspondence, forms, formats.

Types of letters – application with bio-data, enquiries, replies to enquiries, claims, adjustments, sales.

3. Summarising Techniques

One-word substitutes (noun, verb, adverb, adjective); reduction of sentence length; reduction of paragraph length; paraphrasing longer units.

SECTION B

4. Oral Communication Practice

Group discussion; extempore speaking – introducing a speaker, introducing a topic, vote of thanks, offering condolences, making an announcement, speech on a given topic, oral instructions.

5. Meeting Documentation

Notices, circulars, agendas, minutes of meetings.

6. Report Writing

Basics – what is a report, qualities of a good report, style of language in reports, methods, sequencing, structures.

Types of reports – analytical, feasibility, informative, etc.

Non-formal short reports – letter reports, memorandum reports.

7. Descriptive Writing

Simple description of an object often used by engineering students.

Writing instructions on using an object or performing a process.

Reference Books

1. Sushil Bahl, “Business Communication Today”, Response Books, 1996, Rs.125/-

2. Krishna Mohan, R.C. Sharma, “Business Correspondence and Report Writing”, 2nd ed., Tata McGraw Hill, 1997, Rs.110/-

3. Krishna Mohan, Meera Banerji, “Developing Communication Skills”, McMillan & India Ltd., 1997, Rs.88/-

4. E.H.Macgraw, “Basic Managerial Skills For All”, 4th ed., PHI, 1996, Rs.125/-

## Basic Electronics

Semester: II Lect: 4 Hr

Branch: ENC/ CST/ IT Credit: 04

Modeling devices: Static characteristics of ideal two terminal and three terminal devices, small signal models of non-linear devices.

Semiconductor diodes: construction and characteristics, static and dynamic resistance, temperature effects, avalanche and Zener diodes; small-signal models of diodes; some applications of diodes; specifications of diodes; rectifiers: ripple factor, rectification efficiency, regulation, and filters.

Bipolar junction transistor: Construction, characteristics. BJT as amplifier, CB, CE, CC configurations. Biasing circuits, dc analysis and stability factor, DC load line and ac load line.

Single stage transistor amplifiers (CB, CC, and CE). h-parameters, Small signal low frequency ac equivalent circuit, h parameter measurements

FET:- Construction, characteristics, amplifier. CS, CD and CG configurations. Biasing. Low frequency small signal ac equivalent circuit of JFET amplifiers.
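The ripple-factor figures from the rectifier topic follow directly from the definition r = sqrt((I_rms/I_dc)² − 1); a quick check (the function name is my own):

```python
# Ripple factor of half-wave vs full-wave rectification:
# r = sqrt((I_rms / I_dc)^2 - 1).

import math

def ripple_factor(i_rms_over_i_dc):
    return math.sqrt(i_rms_over_i_dc ** 2 - 1)

# Half-wave: I_rms = Im/2, I_dc = Im/pi  ->  ratio = pi/2
half_wave = ripple_factor(math.pi / 2)
# Full-wave: I_rms = Im/sqrt(2), I_dc = 2*Im/pi  ->  ratio = pi/(2*sqrt(2))
full_wave = ripple_factor(math.pi / (2 * math.sqrt(2)))

print(round(half_wave, 2), round(full_wave, 3))   # -> 1.21 0.483
```

These are the standard textbook values: a full-wave rectifier has less than half the ripple of a half-wave rectifier before any filtering.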

Text Books / Reference Books

1. Boylestad and Nashelsky, "Electronic Devices and Circuit Theory", 6th Edition, PHI. (Rs. 295/-)

2. Millman and Grabel, "Microelectronics".

3. V. K. Mehta, "Principles of Electronics", 7th Edition. (Rs. 210/-)

4. Bhargava and Gupta, "Basic Electronics and Linear Circuits". (Rs. 120/-)

5. Kakani and Bhandari, "A Textbook of Electronics".

Source: http://www.umit.ac.in/ (Courses page, accessed 27 May 2011)

__________________________________________________________________________

## 2009-10 First Semester Subjects

AS-1 Chemistry

AS-2 Physics

CS - Computer Skills

EC - Electrical Circuits

ED - Engineering Drawing

EM-1 Engineering Mathematics

EW - Electronics and Mechanical Workshop

#### Applied Mathematics - I

Semester: I Lect: 4Hr

Branch: ENC / CST / IT Credit: 04

Branch: ENC / CST / IT Credit: 04

1. MATRICES

Types of Matrices. Adjoint of a matrix, Inverse of a matrix. Elementary transformations. Rank of a matrix. Reduction to a normal form. Partitioning of matrices. System of homogeneous and non – homogeneous equations, their consistency and solutions. Linear dependence and independence of rows and columns of a matrix area in a real field. Eigen values and Eigen vectors. Cayley Hamilton theoram, Minimal Polynominal – Derogatory and non derogatory matrices. Applications in Engg.

2. DIFFERENTIAL EQUATION

Differential equation of 1st order and 1st degree, Linear – equations. Bernoulli’s equations. Differential equation exact differential equations – integrating factors. Differential equations of higher order. Differential operator D, Where f (D) y = X, {x = eax , sin(ax+b), Cos(ax+b), xm , eax f(x). Linear differential equations with constant and variable coefficients. (Cauchy Linear Equations and Legendre’s Liner equations). Simple applications (Where the differential equation is given). Applications in Engg.

3. DIFFERENTIAL CALCULUS

Successive differentiation, Leibnitz’s theorem ( without proof ) and applications, Rolle’s theorem, Lagrange’s and Cauchy’s Mean value theorem. Applications in Engg.

4. COMPLEX NUMBERS

Definition of complex numbers Cartesian, Polar and exponential form, De–Moiver’s theorem and roots of complex numbers. Hyperbolic functions Separation real and imaginary parts of circular & Hyperbolic functions. Logarithm of complex numbers. Applications in Engg.

**Reference Books:**

1. P.N.Wartikar & J. N. Wartikar, Elements of Applied Mathematics, 1st edition, Pune Vidyarthi Griha Prakashan, 1995. (rs. 110/-)

2. B. S. Grewal, Higher Engineering Mathematics, 34th edition, Khanna Publishers, 1998. (Rs. 170).

3. Shanti Narayan, Matrices, 9th Edition, S. Chand, 1997. (Rs. 45/-)

4. Shanti Narayan, Differential Calculus, 14th Edition, S. Chand, 1996. (Rs. 60/-)

5. A. R. Vashishtha, Matrices, 27th Edition, Krishna Prakashan Mesdia(P) Ltd; 1996. (RS. 75/-)

6. Edwin Kreyszig, Advance Engg. Mathematics, 5th Edition, New Age International (P) Ltd; 1997. (Rs. 295/-)

2. B. S. Grewal, Higher Engineering Mathematics, 34th edition, Khanna Publishers, 1998. (Rs. 170).

3. Shanti Narayan, Matrices, 9th Edition, S. Chand, 1997. (Rs. 45/-)

4. Shanti Narayan, Differential Calculus, 14th Edition, S. Chand, 1996. (Rs. 60/-)

5. A. R. Vashishtha, Matrices, 27th Edition, Krishna Prakashan Mesdia(P) Ltd; 1996. (RS. 75/-)

6. Edwin Kreyszig, Advance Engg. Mathematics, 5th Edition, New Age International (P) Ltd; 1997. (Rs. 295/-)

#### Applied Mathematics - II

Semester: II

Lect: 4 Hr

Branch:ENC/CST/IT Credit: 04

SECTION A

Partial Differentiation: Definition, differentiation of composite and implicit functions, Euler’s theorem on Homogeneous functions, total differentiation of composite functions using partial differentiation, errors and approximation, extreme values of functions of two variables, applications in engineering.

Vector Algebra And Vector Calcus: Product of three or more vectors, vector differentiation – rules and theorems on vector differentiation, scalar point functions and vector point function, gradient, divergent and curl and applications solenoidal and irrotational fields, scalar potential of irrotational vectors, applications in engineering.

Differentiation Under Integral Sign: Theorems on differentiation under integral sign (without proof), Applications in engineering.

SECTION B

Integral Calculus: Curve tracing (only standard curves) Rectification (only arc length), double Integrals – Change of order of integration, double integration of polar coordinates, application of single and double integration – mass and volume, triple integration, applications in engineering.

Error Functions – Beta And Gamma Functions: Error function and its properties, simple problems based on it; beta and gamma functions, properties, relation between beta & gamma functions, duplication formula and problems based on it, applications in engineering.

**References:**

1. P. N. Wartikar & J. N. Wartikar, Elements of Applied Mathematics, 1st edition, Pune Vidyarthi Griha Prakashan, 1995. (Rs. 110/-)

2. B. S. Grewal, Higher Engineering Mathematics, 34th edition, Khanna Publishers, 1998. (Rs. 170).

3. Shanti Narayan, Differential Calculus, 14th Edition, S. Chand, 1996. (Rs. 60/-)

4. Murray Spiegel, Vector Analysis

5. Erwin Kreyszig, Advanced Engineering Mathematics, 5th Edition, New Age International (P) Ltd., 1997. (Rs. 295/-)

#### Applied Science – I

Semester: II Lect: 4 Hr

Branch: ENC/CST/IT

Credit: 04

Section – I (Physics)

Physics of Semiconductors

Introduction to band theory; metals, semiconductors and insulators; charge carriers in semiconductors; conductivity and mobility of charge carriers; concept of Fermi level; Fermi level in intrinsic and extrinsic semiconductors; semiconductor junction diodes.

Introduction to Fiber Optic Communication

i] Propagation of light in an optical fiber; TIR, Angle of Acceptance; Numerical Aperture; Index Difference; Types of Fibers i) Step Index Fiber ii) Graded Index Fiber; Advantages of Optical Fiber, Applications of Optical Fiber Communication System.

ii] Optical Sources

Introduction to Lasers; Terms Associated with Lasers; Theory of Ruby Lasers; He-Ne Laser, LED, Semiconductor Lasers.

iii] Photo Detectors

Minority Charge Carrier Injection, Photo Diodes – p-n, p-i-n, Avalanche.

Ultrasonic

Characteristics of U. S. Waves, Magnetostrictive effect, Magnetostrictive Transducer, Piezoelectric effect, Piezo Quartz Crystal and transducer; Applications of U. S. Waves – i) High power applications such as ultrasonic cleaners and cavitation ii) Low power applications such as Non Destructive Testing Methods – flaw detectors, Ultrasonic Thickness Gauges, Sonars etc.

Super Conductors

Properties Characterizing Superconductors; Implications of Zero resistivity, Critical temp-Tc, critical magnetic field – Hc, Critical current Ic, Meissner effect, Penetration depth;

Types of superconductors; London’s equation; B.C.S. Theory, Josephson’s Effect and junctions, SQUID, Applications of Superconductors.

Introduction to Electromagnetics

Laws of Physics such as Gauss’s Law, Ampere’s Circuital Law, Solenoidal vector B, Faraday – Lenz’s Law expressed in terms of Maxwell’s equations, Modified form of Ampere’s Law.

**References:**

1. R. K. Gaur and S. L. Gupta, Engineering Physics, 7th Edition, Dhanpat Rai Publication Pvt. Ltd., 1997. (Rs. 165/-)

2. B. L. Theraja, Modern Physics., S. Chand and Company Ltd., 1996. (Rs. 60/-)

3. S. G. Patgawkar, Applied Physics – I, 5th Edition, Technova Publication, 1999. (Rs. 75/-)

4. Arthur Beiser, Perspective of Modern Physics, McGraw Hill, 1997. (Rs. 400/-)

5. Charles Kittel, Introduction to Solid State Physics, 7th Edition, John Wiley & Sons, 1996. (Rs. 254/-)

6. J. Wilson and J. F. B. Hawkes, Optoelectronics – An Introduction, 2nd Edition, PHI, 1999. (Rs. 175/-)

#### Section – II (Chemistry)

7. Phase Rule

Phase Rule, Water System, Sulphur System, Phase Rule for two Component Alloy Systems, Eutectic System, Bismuth – Cadmium Eutectic System, Lead – Silver System – Simple Eutectic Formation.

8. Electrochemistry, Specific, Equivalent and Molar Conductance

Introduction, Kohlrausch’s Law of Independent Migration of Ions, Laws of Electrolysis, Transport Number, Conductometric Titration.

9. Spectroscopy

Electromagnetic radiation, Spectroscopy – Principle, Instrumentation and Applications of Microwave, IR and UV-Visible Spectroscopy, Beer-Lambert's Law.

10. Atomic Structure & Atomic Properties

Rutherford’s Model, Bohr’s Model, Aufbau Principle, Pauli’s Exclusion Principle, Hund’s Rule, Electronic Configuration; Atomic Properties like ionization potential, electronegativity, electron affinity, atomic size, oxidation potential.

**References:**

1. Glasstone Lewis, Physical Chemistry

2. C. N. Banwell, Fundamentals of Molecular Spectroscopy, 3rd Edition, Tata McGraw Hill, 1992. (Rs. 69/-)

3. Anand and Chatwal, Instrumental Methods of Analysis, Himalaya Publishing House, 1997. (Rs. 160/-)

#### Communication Skills

Semester: I Lect: 4 Hr

Branch: ENC/ CST/ IT Credit: 04

SECTION A

1. Communication

The process, channels and media, Oral and written communication,

Verbal and non-verbal communication, Body language,

Barriers to communication , Developing communication through techniques.

2. Writing Skills

Vocabulary building – use of synonyms, antonyms, homonyms, homophones, word formation, confused sets of words.

Writing effective paragraphs – through illustration, example, argument, analysis, description and comparison, expansion of key sentences.

Business correspondence – Principles of correspondence, Form, Formats,

Types of letters – Application with bio-data, enquiries, replies to enquiries, claims, adjustments, sales.

3. Summarising Techniques

One-word substitutes (noun, verb, adverb, adjective)

Reduction of sentence length, Reduction of paragraph length,

Paraphrasing longer units.

SECTION B

4. Oral Communication Practice

Group discussion, Extempore speaking- introducing a speaker,

introducing a topic, vote of thanks, offering condolence, making

an announcement, speech on given topic, oral instructions.

5. Meeting Documentation

Notices, Circulars, Agendas, Minutes of meetings

6. Report Writing

Basics-What is a report, Qualities of a good report,

Style of language in reports, Methods, Sequencing, Structures

Types of reports-analytical, feasibility, informative etc.

Non-formal short reports-letter reports, memorandum reports

7. Descriptive Writing

Simple description of an object often used by engineering students

Writing instructions on using an object or performing a process

**Reference Books**

1. Sushil Bahl, “Business Communication Today”, Response Books, 1996, Rs.125/-

2. Krishna Mohan, R.C. Sharma, “Business Correspondence and Report Writing”, 2nd ed., Tata McGraw Hill, 1997, Rs.110/-

3. Krishna Mohan, Meera Banerji, “Developing Communication Skills”, McMillan & India Ltd., 1997, Rs.88/-

4. E. H. McGrath, “Basic Managerial Skills For All”, 4th ed., PHI, 1996, Rs.125/-

#### Basic Electronics

Semester: II Lect: 4 Hr

Branch: ENC/ CST/ IT Credit: 04

Modeling devices: Static characteristics of ideal two terminal and three terminal devices, small signal models of non-linear devices.

Semiconductor diodes, construction and characteristics, static and dynamic resistance, temperature effects, avalanche and zener diodes. Small signal models of diodes; some applications of diodes. Specifications of diodes; rectifiers – ripple factor, rectification efficiency, regulation, and filters.

Bipolar junction transistor: Construction, characteristics. BJT as amplifier, CB, CE, CC configurations. Biasing circuits, dc analysis and stability factor, DC load line and ac load line.

Single stage transistor amplifiers (CB, CC, and CE). h-parameters, Small signal low frequency ac equivalent circuit, h parameter measurements

FET:- Construction, characteristics, amplifier. CS, CD and CG configurations. Biasing. Low frequency small signal ac equivalent circuit of JFET amplifiers.

**Text Books / Reference Books**

1. Boylestad & Nashelsky, “Electronic Devices & Circuit Theory”, 6th edition, PHI. (Rs.295/-)

2. Millman & Grabel, “Microelectronics”

3. V. K. Mehta, “Principles of Electronics”, 7th edition. (Rs.210/-)

4. Bhargav Gupta, “Basic Electronics & Linear Circuit”. (Rs.120/-)

5. Kakani – Bhandari, “A Textbook of Electronics”.

Updated 9 June 2015, 15 February 2012

## Friday, May 15, 2015

### Data Analytics and Data Mining - Difference Explained

Data analytics can be classified into three categories:

Descriptive analytics: Describes the collected data or dataset with clear visualization and summary.

Predictive analytics: Predict the future behavior of interest. Provides scenario analysis.

Prescriptive analytics: Makes or suggests smart decisions based on the predictive results. Optimization of solution based on the results of predictive analytics.

The three steps or categories of data analytics have to be used to make a decision based on data. To make data analytics valid or effective within a company in many different decisions, the company needs to involve at least three different people with different skills:

Business experts: Some of them set the problem objective and some provide the decision model, which is based on domain knowledge. The decision model indicates the data to be collected, the processes from which the data will be collected and the period for which data need to be collected.

Information technology experts: They design the database, which is typically populated during transaction processing, and they also manage the database.

Data analysis experts: They understand data mining, statistical and OR techniques.

Data analytics, as explained, is an objective-oriented process that aims to make smart decisions. The goal is set first, and data is analyzed to take the decision that helps in achieving the goal in an efficient manner.

Data mining focuses on identifying undiscovered patterns and establishing hidden relationships embedded in the dataset. Data mining is a part of the predictive analytics category.

## Thursday, May 14, 2015

### Applications of Data Mining Techniques in Stock Selection

Bayesian Networks

Decision Trees

Fuzzy Sets

Neural Networks

Rough Sets

Fundamental Analysis of Stock Price Artificial Neural Network Model based on Rough Set Theory

Wei Wu, Jiuping Ju, World Journal of Modeling and Simulation, Vol. 2, (2006), No. 1, pp. 36-44

http://www.worldacademicunion.com/journal/1746-7233WJMS/WJMSvol2no1paper4.pdf

Article in Transactions in Rough Sets XVII March 2014

https://books.google.co.in/books?id=Mky7BQAAQBAJ

Rough Sets in Economics and Finance - Has a section on applications in portfolio selection.

### Data Mining Methods - Brief Introduction

A variety of Data Mining methods and algorithms exist. Some are well founded in mathematics and statistics, whereas others are used simply because they produce useful results. Some of them are:

Method - Foundation of the Method

Bayesian Network - Statistics

Decision Trees - Machine learning

Fuzzy Sets - Logic and Statistics

Inductive Logic Programming - Logic, Machine learning

Neural Networks - Black Box method

Rough Sets - Logic, Statistics, and Algebra

## Bayesian Networks

Bayesian networks, or belief networks, are a method to represent knowledge using directed acyclic graphs (DAGs). The vertices are events that describe the state of some part of the desired universe in some time interval. The edges represent relations between events, and the absence of edges represents independence between events.

A variable takes on values that correspond to events. Variables may be discrete or continuous. A database is denoted D in the following, the set of all variables U, and a Bayesian network-structure is denoted B. If D consists of all variables in U, then D is complete. Two structures are said to be equivalent if they have the same set of probability distributions.

The existing knowledge of an expert or set of experts is encoded into a Bayesian network, then a database is used to update the knowledge, and thus creates one or several new networks.

Theoretical Overview

In probability theory, P(A) denotes the probability that event A will occur. Belief measures in the Bayesian formalism obey the three basic axioms of probability theory: P(A) ≥ 0 for every event A, P(S) = 1 for the certain event S, and P(A ∪ B) = P(A) + P(B) for mutually exclusive events A and B.

The Bayes rule is the basis for Bayesian Networks: P(A|B) = P(B|A) P(A) / P(B).

Probability can be seen as a measure of belief in how probable outcomes are, given an event (subjective interpretation). It can also be interpreted as a frequency; how often it has happened in the past (objective interpretation).
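Bayes rule, P(A|B) = P(B|A) P(A) / P(B), can be illustrated with a small numeric sketch; the probabilities below are assumed purely for illustration:

```python
def posterior(p_b_given_a, p_a, p_b):
    """Bayes rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical diagnostic test: P(A) is the prior probability of a
# condition, P(B|A) the probability of a positive test given the condition.
p_a = 0.01
p_b_given_a = 0.9
p_b = 0.9 * 0.01 + 0.05 * 0.99      # total probability of a positive test
p_a_given_b = posterior(p_b_given_a, p_a, p_b)
# about 0.154: even a fairly accurate test yields a modest posterior
# when the prior is small.
```

Note how the subjective interpretation reads this as updated belief in A after observing B, while the frequency interpretation reads it as the long-run proportion of B-cases that are also A-cases.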

Bayesian Networks Applied in Data Mining

When Bayesian Networks are applied in Data Mining, the user may want to apply techniques to learn the structure of a Bayesian network with one or more variables, or to reason on a structure that is already known.

Learning Structure and Variables

To learn a structure, we want to generate candidate Bayesian Networks, and choose the structure that is best according to the database we have.

The posterior probability distribution gives the probability distribution for the next observation after looking in the database.

To find a Bayesian Network B for a database D, we can use metrics together with a search algorithm. Some of the metrics are:

Bayes Factor (BF)

Bayesian Dirichlet (BD)

Maximum a Posteriori (MAP)

Akaike Information Criterion (AIC)

Bayesian Information Criterion (BIC)

Minimum Description Length (MDL)

Some search methods that exist are:

Local Search - a heuristic search algorithm on the graph.

Iterated Hill-Climb - uses Local Search, but avoids getting stuck in a local maximum.
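A minimal sketch of such a local search over network structures; the scoring function here is a stand-in (a real implementation would score candidate structures against the database with one of the metrics above):

```python
def is_acyclic(nodes, edges):
    """Kahn's algorithm: repeatedly remove nodes with no incoming edge."""
    remaining, es = set(nodes), set(edges)
    while remaining:
        sources = [n for n in remaining if not any(v == n for _, v in es)]
        if not sources:
            return False                 # a cycle keeps every node blocked
        remaining -= set(sources)
        es = {(u, v) for (u, v) in es if u not in sources}
    return True

def hill_climb(nodes, score, max_iters=100):
    """Greedy local search over DAG edge sets: add or remove one edge per step."""
    current = frozenset()
    for _ in range(max_iters):
        candidates = []
        for u in nodes:
            for v in nodes:
                if u == v:
                    continue
                e = (u, v)
                nxt = current - {e} if e in current else current | {e}
                if is_acyclic(nodes, nxt):
                    candidates.append(frozenset(nxt))
        best = max(candidates, key=score)
        if score(best) <= score(current):
            break                        # local maximum reached
        current = best
    return current

# Stand-in score: prefer structures close to a known target edge set.
target = frozenset({('A', 'B')})
result = hill_climb(['A', 'B', 'C'], lambda edges: -len(edges ^ target))
```

Iterated hill-climbing would restart this search from several different starting structures and keep the best result, to reduce the chance of stopping at a poor local maximum.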

If data are missing in the database, they can be filled in using the posterior distribution. Alternatively, we can use Gibbs sampling, which approximates the expectation of any function with respect to its probability distribution. Another alternative is the expectation-maximization (EM) algorithm, which can be used to compute a score from the BIC metric.

Properties

Noise

Missing data can be filled in using Gibbs sampling or the posterior distribution. The method is relatively resistant to noise.

Consistency

Bayesian Networks are able to reason with uncertainty. Uncertainty in the data will be included in the model as probabilities of different outcomes.

Prior knowledge

The Bayesian network structure either has to be learned from the database, or it can be supplied as prior knowledge.

Output

The output from Bayesian Network algorithms is a graph, which is easy to interpret for humans.

Complexity

There exist classifiers that perform in linear time, but further iterations give better results.

Retractability

The method is statistically founded, and it is always possible to examine why the Bayesian method produced certain results.

## Inductive Logic Programming

Theoretical Overview

Inductive Logic Programming (ILP) is a method for learning first-order definite clauses, or Horn clauses, from data.

ILP Applied in Data Mining

ILP systems can be divided into three groups:

Empirical systems are single-predicate learning systems that are able to analyze large example sets with little or no human interaction.

Interactive systems incrementally build complex domain theories consisting of multiple predicates, where the user controls and initiates the model construction and refinement process.

Programming assistants learn from small example sets without interaction with humans.

The systems can use a multitude of techniques when inducing knowledge. These techniques are discussed briefly below.

Inverse Resolution

Inverse Resolution is motivated by the fact that in deduction, results found by resolution are clauses. We invert this procedure to find clauses from results.

Least General Generalization

The method uses the relative least general generalization to compute the most general clause given the background knowledge.

Search Refinement Graph

The top-down learner starts from the most general clause and repeatedly refines it until it no longer covers negative examples. During the search the learner ensures that the clause considered covers at least one positive example.

Rule Modeling

All possible instantiations of the predicate variables are tried systematically for each rule model, and the resulting clauses are tested against the examples and background knowledge to produce new clauses that can be learned.

Transformation

An ILP problem is transformed from relational to attribute-value form and solved by an attribute-value learner. This approach is only feasible for a small number of ILP problems where the clauses are typed, constrained and non-recursive. The induced hypothesis is transformed back into relational form.

Properties

Noise

Early ILP systems did not handle noise, but recent systems such as FOIL and LINUS are able to cope with noise.

Consistency

General rules can not be generated from inconsistent data. Inconsistency must be removed by preprocessing the data.

Prior knowledge

The ILP methods require background knowledge on the form of definite clauses.

Output

The output is given as definite clauses, often in Prolog syntax.

Retractability

Inductive logic programming has a mathematical basis, and it is always possible to analyze why results are produced.

Overfit/Underfit

Overfitting can be a problem if rules are generated that have low support.

## Rough Sets

The Rough Sets method was designed as a mathematical tool to deal with uncertainty in AI applications. Rough sets have also proven useful in Data Mining. The method was first introduced by Zdzislaw Pawlak in 1982.

Theoretical Overview

The starting point of Rough Sets theory is an information system, an ordered pair P = (U, A). U is a non-empty finite set called the universe, and A is a non-empty finite set of attributes. The elements of the universe are called objects, and for each attribute a in A, a(x) denotes the value of attribute a for object x in U. It is often useful to illustrate an information system in a table, called an information system table. One of the most important concepts in Rough Sets is indiscernibility, denoted IND(); indiscernibility is an equivalence relation.

The indiscernible objects can be classified in equivalence classes:

Intuitively the objects of an equivalence class are indiscernible from all other objects in the class, with respect to the attributes in the set B. In most cases we therefore consider the equivalence classes and not the individual objects.
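The partitioning into equivalence classes can be sketched in a few lines of Python; the information-system table below is a hypothetical example:

```python
def equivalence_classes(universe, attrs, value):
    """Partition the universe by IND(attrs): objects with identical values
    on every attribute in attrs fall into the same equivalence class."""
    classes = {}
    for x in universe:
        key = tuple(value(x, a) for a in attrs)
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

# Hypothetical information-system table: three objects, two attributes.
table = {
    'x1': {'colour': 'red',  'size': 'big'},
    'x2': {'colour': 'red',  'size': 'big'},
    'x3': {'colour': 'blue', 'size': 'big'},
}
classes = equivalence_classes(table, ['colour', 'size'],
                              lambda x, a: table[x][a])
# x1 and x2 are indiscernible with respect to {colour, size}
```

Dropping the `colour` attribute would merge all three objects into one class, which is exactly the kind of change the reduct computation below tests for.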

Some of the objects' attributes may be less important than others when doing the classification. An attribute a is said to be dispensable (superfluous) in B if IND(B) = IND(B − {a}); otherwise the attribute is indispensable in B. A minimal set of attributes that preserves the partitioning of the universe is called a reduct, RED. Thus a reduct is a set of attributes B such that every attribute in B is indispensable and IND(B) = IND(A). There are also object-related reducts that give the set of attributes needed to separate one particular object/class from all the others.

Lower and upper approximation, rough definability of sets, and the rough membership function are further concepts of Rough Sets. The lower and upper approximations are used to classify a set of objects, X, from the universe that may belong to different equivalence classes.
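A minimal sketch of the lower and upper approximations, assuming the equivalence classes have already been computed (the partition below is hypothetical):

```python
def approximations(classes, X):
    """Lower approximation: union of classes wholly inside X.
    Upper approximation: union of classes that intersect X."""
    lower, upper = set(), set()
    for c in classes:
        if c <= X:
            lower |= c
        if c & X:
            upper |= c
    return lower, upper

# Hypothetical partition of a six-object universe into IND-classes:
classes = [{'x1', 'x2'}, {'x3'}, {'x4', 'x5'}, {'x6'}]
X = {'x1', 'x2', 'x4'}           # target set to approximate
lower, upper = approximations(classes, X)
# lower == {'x1', 'x2'}; upper == {'x1', 'x2', 'x4', 'x5'}
```

X is roughly definable here because the two approximations differ: x4 is in X but its class also contains x5, which is not.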

A natural extension of the information system is the decision system. A decision system is an information system with an extra set of attributes, called decision attributes.

A decision table is a table of the indiscernibility classes, the attributes and the decision attribute.

### Rough Sets Applied in Data Mining

When using rough sets as an algorithm for data mining, we generate decision rules that map the values of an object's attributes to a decision value. The rules are generated from a test set. There are two types of decision rules: definite and default decision rules. Definite decision rules map the attribute values of an object into exactly one decision class. A deterministic decision system can be completely expressed by a set of definite rules, but an indeterministic decision system cannot.

Properties

The subsections below compare the Rough Set method to the criteria outlined for the methods above. This comparison gives a framework that can be used to compare the various data mining techniques.

Noise

The rough set method is capable of handling most types of noise. If the input data are missing an attribute that is dispensable, then the classification process is not affected. If the input data are missing important attributes, then Rough Sets cannot classify the object and the object must be thrown away. There are algorithms that can 'guess' the value of a missing attribute, by doing statistical insertion among other things, and thereby keep the object.

Consistency

As described before, the default rules in Rough Sets can handle inconsistency in the information system quite nicely. In fact, as pointed out, it may be an advantage to introduce inconsistency, by removing attributes, to make the decision rules more general.

Prior Knowledge

The Rough Set method does not use prior knowledge when generating default rules. As mentioned before, the default rules generated by Rough Set algorithms are generated only from the objects in the input data.

Output

The output from the Rough Set algorithms is rules, preferably default rules. These rules can then be used to predict decisions based on the input attributes. By varying the threshold value, one can tune the number of default rules to get the best result. Rules are easy for humans to interpret and use.

Complexity

The complexity of partitioning is O(n log n), as for a sorting algorithm. Computing the reducts is NP-complete, but there are approximation algorithms (genetic, Johnson, heuristic search) that have proven very efficient.

Retractability

It is easy for the user to find out how the results were generated, because the Rough Sets method can display the indiscernibility classes it created and the distribution of the objects in these classes. From this the user can confirm the generated rules and argue, on the basis of the classes, why the rules are correct.

http://www.idi.ntnu.no/~dingsoyr/project/report.html

## Sunday, April 19, 2015

### Introduction to Computer Science - What are the Subjects of the Discipline Computer Science?

How to Think Like a Computer Scientist

Online Book Chapters

Chapter 1. The Way of the Program

MIT Program - 20 Lecture Course

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00-introduction-to-computer-science-and-programming-fall-2008/readings/

Download Full Course Material

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00-introduction-to-computer-science-and-programming-fall-2008/download-course-materials/

## Sunday, April 5, 2015

### Computer Science - Some Interview Questions

Basic Concepts of Important Subjects of Computer Science

What is an algorithm?

The method of solving a problem is known as an algorithm.

More precisely, an algorithm is a sequence of instructions that act on some input data to produce some output in a finite number of steps.

An algorithm must have the following properties.

1. Input

2. Output

3. Finiteness

4. Definiteness

5. Effectiveness

What is analysis of algorithms?

An algorithm must not only be able to solve the problem at hand, it must be able to do so in as efficient a manner as possible. Determining which algorithm is more efficient than another involves analysis of algorithms.

The number of significant operations performed has to be counted. There are two classes of operations typically chosen as the significant operations: comparisons and arithmetic.

Among arithmetic operations, additive and multiplicative operations are counted separately, as a multiplication operation takes more time than an addition operation.

Cases to be considered during analysis - Best case, Worst case, Average case
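The best-, worst- and average-case distinction can be made concrete by counting comparisons; linear search is a minimal illustrative example (the data values are arbitrary):

```python
def linear_search_comparisons(data, target):
    """Scan the list and return (found, number of comparisons performed)."""
    for i, x in enumerate(data):
        if x == target:
            return True, i + 1          # found after i + 1 comparisons
    return False, len(data)             # not found: n comparisons

data = [3, 1, 4, 1, 5, 9, 2, 6]
best = linear_search_comparisons(data, 3)    # target in the first position
worst = linear_search_comparisons(data, 7)   # target absent
```

The best case costs one comparison and the worst case n; for a target that is present at a uniformly random position, the average case is about (n + 1) / 2 comparisons.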

Additional Issues of Analysis

Time efficiency

Space efficiency

Simplicity

Generality

Range of input that an algorithm takes

What are important data structures?

1. Arrays 2. Strings 3. Linked lists 4. Sparse matrices 5. Stacks 6. Queues 7. Trees 8. Graphs

Searching and sorting are important operations on data structures.

Algorithmic Strategies

1. Naive - Brute force - Attempting to solve a problem without thinking of doing it efficiently.

2. Divide and conquer

3. Greedy method - Optimization at every stage.

4. Dynamic programming - Stage-wise optimization. From every point the optimal path to the goal is found starting with one stage ahead of the goal.

5. Backtracking - In backtracking, not all possible alternatives are searched to find a solution. Thus, the algorithm reaches a solution in fewer steps compared to an exhaustive search of all possible alternatives. The efficiency is obtained by eliminating certain paths early, by determining that they are not feasible or efficient paths.

6. Branch and bound

7. Internet Algorithms - String and pattern matching algorithms - Rabin-Karp algorithm - String matching with finite automata - Boyer-Moore algorithm - Knuth-Morris-Pratt algorithm
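The stage-wise optimization described for dynamic programming can be sketched as backward induction: the cheapest cost-to-goal is computed for the stage one step ahead of the goal first, then propagated backwards (the stage graph and edge costs below are hypothetical):

```python
def shortest_path_by_stages(stages, cost):
    """Backward dynamic programming over a staged graph.
    stages: list of lists of nodes; stages[-1] holds only the goal node.
    cost(u, v): cost of the edge from u to v in the next stage."""
    best = {stages[-1][0]: 0}            # cost-to-goal at the goal is zero
    for stage, nxt in zip(reversed(stages[:-1]), reversed(stages[1:])):
        for u in stage:                  # cheapest route from u via any v
            best[u] = min(cost(u, v) + best[v] for v in nxt)
    return best

# Hypothetical 3-stage graph: a -> {b, c} -> g
edge = {('a', 'b'): 1, ('a', 'c'): 4, ('b', 'g'): 5, ('c', 'g'): 1}
best = shortest_path_by_stages([['a'], ['b', 'c'], ['g']],
                               lambda u, v: edge[(u, v)])
# best['a'] == 5, achieved via a -> c -> g
```

Each node's optimal cost is computed exactly once and reused, which is what distinguishes dynamic programming from recomputing overlapping subproblems by brute force.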

A Database-management system (DBMS) is a collection of interrelated data and a set of programs to access those data.

The primary goal of a DBMS is to provide a way to store and retrieve database information that is both convenient and efficient.

Management of data involves both defining structures for storage of information and providing mechanisms for the manipulation of information.

Database Systems versus File Systems

File systems were used earlier. Data was stored in different individual files and was accessed as required by accessing required files. But the system had drawbacks which were overcome by database systems.

Drawbacks of file-based systems

Data redundancy and inconsistency

Difficulty in accessing data - Multiple files are to be accessed.

Data isolation

Integrity problems

Atomicity problem - Ability to restore all the data up to an instant

Concurrent access anomalies

Security

Instances and Schema

The collection of information stored in the database at a particular moment is called an instance of the database.

The overall design of the database is called the database schema.

Data Models

Underlying the structure of a database is the data model: a collection of conceptual tools for describing data, data relationships, data semantics, and consistency constraints.

Entity Relationship Model

The entity-relationship (E-R) model is based on the idea that the real world consists of a collection of basic objects, called entities, and relationships among these objects or entities.

Relational Model

The relational model uses a collection of tables to represent both data and the relationships among those data. Each table has multiple columns, and each column has a unique name.

The relational model is a record-based model. A relational database is structured in fixed-format records of several types. Each table contains records of a particular type. Each record type defines a fixed number of fields, or attributes. The columns of the table correspond to the attributes of the record type.

Object Oriented Data Model

The object oriented model can be seen as extending the E-R model with notions of encapsulation, methods (functions) and object identity.

Database languages

Data definition language (DDL)

Data manipulation language (DML)

Database Access from Application Programs

Application programs of users interact with the database to retrieve data they want. Application programs are usually written in languages such as Cobol, C, C++, or Java.

Database System Structure

A database system is partitioned into modules. The two important modules are storage manager and the query processor module or component.

Storage Manager Components

Authorization and integrity manager

Transaction manager

File manager

Buffer manager

The storage manager implements several data structures as part of the physical system implementation

Data files (tables)

Data dictionary

Indices

The Query Processor Components

DDL Interpreter

DML Compiler

Query evaluation manager.

Application architecture

Two tier - Application at client level - database at server level

Three tier - client front end - application server - database server

Relational Database Design

First Normal Form

A domain is atomic if elements of the domain are considered to be indivisible units. We say that a relation schema R is in first normal form (1NF) if the domains of all attributes of R are atomic.

Boyce-Codd Normal Form (BCNF)

Third Normal Form

Functions of Computer

Data Processing

Data Storage

Data Movement

Control

Structure

There are four main structural components of a computer

1. CPU

2. Main Memory

3. I/O

4. System Interconnection: Some mechanism that provides for communication among CPU, main memory and I/O

Why does an IT or Computer Science student have to know computer architecture and organization?

To select the most effective computer for use throughout the organization. The ability to make choices between cache sizes, clock rates, etc. is essential.

Many processor or computer components are part of embedded systems. The IT person must be able to debug them.

Generation of Computer - Period - Technology - Typical Speed (Operations per second)

First

Second

Third

Fourth

Fifth

Sixth - 1991 - Ultra large scale integration - one billion

Memory Hierarchy

Internal Memory

Registers

Cache

Main internal memory

Outboard Storage

Magnetic disc

CD ROM

CD-RW

DVD-RW

DVD-RAM

Off-line storage

Magnetic tape

MO

WORM

Cache memory has a high speed of data transfer, like registers, but is less expensive than registers, with a cost closer to that of main memory.

Cache contains a copy of portions of main memory.

Computer Graphics

Raster scan displays

The beam intensity is turned on and off to create a pattern of illuminated spots that give a picture. Each screen point is referred to as a pixel or pel (shortened forms of picture element).

The raster system can have 2 bits per pixel, but high-quality systems have even 24 bits per pixel.

Refreshing the picture takes place at the rate of 60 to 80 frames per second.

Graphics Functions

A general purpose graphics package provides users with a variety of functions for creating and manipulating pictures.

Output Primitives

Line Drawing Algorithms

DDA Algorithm

Bresenham's Line Algorithm

Parallel Line Algorithm
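A minimal sketch of the DDA (digital differential analyzer) line algorithm, assuming integer, non-negative endpoints for simplicity: the line is sampled once per unit along the major axis, with the other coordinate incremented by a fixed fraction and rounded to the nearest pixel.

```python
def dda_line(x0, y0, x1, y1):
    """Rasterize a line with the DDA algorithm: step once per unit along
    the major axis, accumulating fractional increments on the other."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))        # unit steps along the major axis
    if steps == 0:
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y, points = float(x0), float(y0), []
    for _ in range(steps + 1):
        points.append((int(x + 0.5), int(y + 0.5)))   # round to nearest pixel
        x += x_inc
        y += y_inc
    return points

pixels = dda_line(0, 0, 4, 2)
# [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Bresenham's algorithm produces the same kind of pixel sequence using only integer arithmetic, which is why it is usually preferred in practice over this floating-point formulation.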

Circle Generating Algorithms

Midpoint Circle Algorithm

Ellipse Generating Algorithms

Midpoint Ellipse Algorithm

Software Engineering Practice

George Polya outlined the essenece of problem solving

1. Understand the problem

2. Plan a solution

3. Carry out the plan

4. Examine the result for accuracy

Core Principles

the dictionary defines the word principle as "an important underlying law or assumption required in a system of thought."

David Hooker has proposed seven core principles that focus on software engineering process as a whole.

1. The reason it all exists.

2. Keep it simple, stupid

3. Maintain the vision

4. What you produce, others will consume

5. Be open to the future

6. Plan ahead for reuse.

7. Think

Communication Principles that apply to customer communication

1. Listen

2. Prepare before you communicate

3. Someone should facilitate the activity

4. face to face communication is best.

5. Take notes and document decisions

6. Strive for collaboration

7. Stay focused, modularize your discussion.

8. If something is unclear, draw a picture

9. Once you agree to something move on; If you can't agree tosomething move on; If a feature or function is unclear and cannot be clarified at the moment, move on.

10. Negotiation is not a contest or a game. It works best when both parties win.

Principles of Planning

1. Understand the scope of the project

2. Involve the customer in the planning activity

3. Recognize that planning is iterative

4. Estimate based on what you know.

5. Consider risk as you define the plan

6. Be realistic

7. Adjust granularity as you define the plan

8. Define how you intend to ensure quality.

9. Describe how you intend to accommodate change

10. Track the plan frequently and make adjustments as required.

Analysis Modeling Principles

1. The information domain of a problem must be represented and understood.

2. The functions that the software performs must be defined.

3. The behavior of the software (as a consequence of external events) must be represented.

4. the models that depict information, function, and behavior must be partitioned in a manner that uncovers detail in a layered(or hierarchical) fashion.

5. The analysis task should move from essential information toward implementation detail.

Software Design Modeling Principles

1 Design should be traceable to the analysis model.

2. Always consider architecture of the system to be built.

3. Design of data is as important as design of processing function

4. Interfaces must be designed with care (both external and internal)

5. User interface design should be designed to the needs of the end user.

6. Component level design should be functionally independent.

7. Components should be loosely coupled to one another and to the external environment.

8. Design representations (models) should be easily understandable.

9. The design should be developed iternatively. With each iteration the designer should strive for greater simplicity.

Coding Principles and Concepts

Preparation Principles

1. Understand the problem you're trying to solve.

2. Understand the basic design principles and concepts.

3. Pick a programming language that meets the needs of the software to be built and the environment in which it will operate.

4. Select a programming environment that provides tools that will make your work easier.

5. Create a set of unit tests that will be applied once the component you code is completed.

Coding Principles

1. Constrain your algorithms by following structured programming (BOH00)

2. Select data structures that will meet the needs of the design.

Understand the software architecture and create interfaces that are consistent with it.

4. Keep conditional logic as simple as possible.

5. create nested loops in a way that makes them easily testable.

6. Select meaningful variable names and follow other local coding standards

7. Write code that is self-documenting.

8. Create visual layout (e.g., indentation and blank lines) that aids understanding.

Validation Principles

1. Conduct a code walkthrough when appropriate.

2. Perform unit tests and correct errors yoou've uncovered.

3. Refactor the code.

Software Testing Principles

Principles developed by Davis

1. All tests should be traceable to customer requirements.

2. tests should be planned long before testing begins.

3. The pareto principle applies to software testing.

4. testing should begin "in the small" and progress toward testing "in the large"

5. Exhaustive testing is not possible

Deployment Principles

1. Customer expectations for the software must be managed.

2. A complete delivery package must be assembled and tested.

3. A support regime must be established before the software is delivered.

4. Appropriate instructional materials must be provided to end users.

5. Buggy software should be fixed first, delivered later.

## Data Structures

What is an algorithm?

The method of solving a problem is known as an algorithm.

More precisely, an algorithm is a sequence of instructions that act on some input data to produce some output in a finite number of steps.

An algorithm must have the following properties.

1. Input

2. Output

3. Finiteness

4. Definiteness

5. Effectiveness

What is analysis of algorithms?

An algorithm must not only solve the problem at hand, it must do so as efficiently as possible. Determining which of two algorithms is more efficient involves analysis of algorithms.

The number of operations the algorithm performs must be counted, and counting is done only for significant operations. Two classes of operations are typically chosen as significant: comparisons and arithmetic.

Among arithmetic operations, additive and multiplicative operations are counted separately, since a multiplication takes more time than an addition.

Cases to be considered during analysis - Best case, Worst case, Average case
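As a concrete illustration (a Python sketch, not part of the original notes), the following counts the significant operation of a sequential search - comparisons - and shows how best and worst cases differ:

```python
def sequential_search(items, target):
    """Return (index, comparison_count); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1          # count each significant operation
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
# Best case: target in the first position -> 1 comparison
print(sequential_search(data, 7))   # (0, 1)
# Worst case: target absent -> n comparisons
print(sequential_search(data, 4))   # (-1, 5)
```

The average case for a successful search falls between these extremes, at roughly n/2 comparisons.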

Additional Issues of Analysis

Time efficiency

Space efficiency

Simplicity

Generality

Range of input that an algorithm takes

What are important data structures

1. Arrays 2. Strings 3. Linked lists 4. Sparse matrices 5. Stacks 6. Queues 7. Trees 8. Graphs

Searching and Sorting is an important operation on data structures.

## Design of Algorithms

Algorithmic Strategies

1. Naive - Brute force - Attempting to solve a problem directly, without regard to efficiency.

2. Divide and conquer

3. Greedy method - Optimization at every stage.

4. Dynamic programming - Stage-wise optimization: from every point, the optimal path to the goal is found, starting one stage ahead of the goal.

5. Backtracking - In backtracking, not all possible alternatives are searched. The algorithm therefore reaches a solution in fewer steps than an exhaustive search that examines every alternative. The efficiency is obtained by eliminating certain paths early, once they are determined to be infeasible or inefficient.

6. Branch and bound

7. Internet Algorithms - String and pattern matching algorithms - Rabin-Karp algorithm - String matching with finite automata - Boyer-Moore algorithm - Knuth-Morris-Pratt algorithm
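A minimal Python sketch of one of the string-matching algorithms named above, the Rabin-Karp algorithm; the base and modulus values here are illustrative choices, and spurious hash matches are ruled out by direct verification:

```python
def rabin_karp(text, pattern, base=256, mod=101):
    """Return the start indices of all occurrences of pattern in text."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    h = pow(base, m - 1, mod)            # weight of the window's leading char
    p_hash = t_hash = 0
    for i in range(m):                   # hash the pattern and the first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    matches = []
    for s in range(n - m + 1):
        # a hash match is only a candidate; verify to rule out collisions
        if p_hash == t_hash and text[s:s + m] == pattern:
            matches.append(s)
        if s < n - m:                    # roll the window right by one char
            t_hash = ((t_hash - ord(text[s]) * h) * base + ord(text[s + m])) % mod
    return matches

print(rabin_karp("ababcabab", "abab"))   # [0, 5]
```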

## Database System Concepts

A database-management system (DBMS) is a collection of interrelated data and a set of programs to access those data.

The primary goal of a DBMS is to provide a way to store and retrieve database information that is both convenient and efficient.

Management of data involves both defining structures for storage of information and providing mechanisms for the manipulation of information.

Database Systems versus File Systems

File systems were used earlier: data was stored in separate individual files and accessed by opening the required file. This approach had drawbacks, which database systems overcome.

Drawbacks of file-based systems

Data redundancy and inconsistency

Difficulty in accessing data - Multiple files are to be accessed.

Data isolation

Integrity problems

Atomicity problems - difficulty in ensuring that a set of updates either completes entirely or not at all, so that the data can be restored to a consistent state after a failure

Concurrent access anomalies

Security

Instances and Schema

The collection of information stored in the database at a particular moment is called an instance of the database.

The overall design of the database is called the database schema.

Data Models

Underlying the structure of a database is the data model: a collection of conceptual tools for describing data, data relationships, data semantics, and consistency constraints.

Entity Relationship Model

The entity-relationship (E-R) model is based on the idea that the real world consists of a collection of basic objects, called entities, and of relationships among these objects.

Relational Model

The relational model uses a collection of tables to represent both data and the relationships among those data. Each table has multiple columns, and each column has a unique name.

The relational model is a record-based model. A relational database is structured in fixed-format records of several types. Each table contains records of a particular type, and each record type defines a fixed number of fields, or attributes. The columns of the table correspond to the attributes of the record type.

Object Oriented Data Model

The object oriented model can be seen as extending the E-R model with notions of encapsulation, methods (functions) and object identity.

Database languages

Data definition language (DDL)

Data manipulation language (DML)

Database Access from Application Programs

Application programs of users interact with the database to retrieve data they want. Application programs are usually written in languages such as Cobol, C, C++, or Java.

Database System Structure

A database system is partitioned into modules. The two important modules are storage manager and the query processor module or component.

Storage Manager Components

Authorization and integrity manager

Transaction manager

File manager

Buffer manager

The storage manager implements several data structures as part of the physical system implementation

Data files (tables)

Data dictionary

Indices

The Query Processor Components

DDL Interpreter

DML Compiler

Query evaluation manager.

Application architecture

Two tier - Application at client level - database at server level

Three tier - client front end - application server - database server

Relational Database Design

First Normal Form

A domain is atomic if elements of the domain are considered to be indivisible units. We say that a relation schema R is in first normal form (1NF) if the domains of all attributes of R are atomic.
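An illustrative Python sketch (the employee/phone relation is a made-up example): a multi-valued `phones` attribute makes the domain non-atomic and thus violates 1NF; flattening it to one atomic value per row restores 1NF:

```python
# Hypothetical un-normalized rows: 'phones' holds a set of values in one
# field, so that attribute's domain is not atomic and the relation is not 1NF.
unnormalized = [
    {"emp_id": 1, "name": "Asha", "phones": ["98200-11111", "98200-22222"]},
    {"emp_id": 2, "name": "Ravi", "phones": ["98200-33333"]},
]

def to_1nf(rows):
    """Flatten the multi-valued attribute: one row per (employee, phone)."""
    return [
        {"emp_id": r["emp_id"], "name": r["name"], "phone": p}
        for r in rows
        for p in r["phones"]
    ]

for row in to_1nf(unnormalized):
    print(row)
```

Every attribute value in the flattened relation is now an indivisible unit, which is exactly the 1NF condition stated above.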

Boyce-Codd Normal Form (BCNF)

Third Normal Form

## Computer Organization and Architecture

Functions of Computer

Data Processing

Data Storage

Data Movement

Control

Structure

There are four main structural components of a computer

1. CPU

2. Main Memory

3. I/O

4. System Interconnection: Some mechanism that provides for communication among CPU, main memory and I/O

Why an IT or Computer Science student has to know computer architecture and organization?

To select the most effective computer for use throughout the organization: the ability to make trade-offs among cache sizes, clock rates, and other parameters is essential.

Many processor or computer components are part of embedded systems. The IT person must be able to debug them.

| Generation of Computer | Period | Technology | Typical Speed (operations per second) |
|---|---|---|---|
| First | 1946-1957 | Vacuum tube | 40,000 |
| Second | 1958-1964 | Transistor | 200,000 |
| Third | 1965-1971 | Small- and medium-scale integration | 1,000,000 |
| Fourth | 1972-1977 | Large-scale integration | 10,000,000 |
| Fifth | 1978-1991 | Very-large-scale integration | 100,000,000 |
| Sixth | 1991- | Ultra-large-scale integration | one billion |

Memory Hierarchy

Internal Memory

Registers

Cache

Main internal memory

Outboard Storage

Magnetic disc

CD-ROM

CD-RW

DVD-RW

DVD-RAM

Off-line storage

Magnetic tape

MO

WORM

Cache memory offers high-speed data transfer like registers, but is less expensive than registers; its cost is closer to that of main memory.

Cache contains a copy of portions of main memory.
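To make that idea concrete, here is a toy Python model (an illustration only, not a hardware description) of a direct-mapped cache that copies a block of main memory on a miss and serves later accesses to the same block as hits:

```python
# Toy direct-mapped cache: each main-memory block maps to exactly one cache
# line (block_number % num_lines); the cache holds copies of memory portions.
class DirectMappedCache:
    def __init__(self, num_lines, block_size):
        self.num_lines = num_lines
        self.block_size = block_size
        self.lines = {}  # line index -> (tag, data copied from main memory)

    def read(self, memory, address):
        """Return (value, hit_flag) for a byte address."""
        block = address // self.block_size
        line = block % self.num_lines
        tag = block // self.num_lines
        entry = self.lines.get(line)
        if entry is not None and entry[0] == tag:
            hit = True                    # data already copied into the cache
        else:
            hit = False                   # miss: copy the whole block in
            start = block * self.block_size
            self.lines[line] = (tag, memory[start:start + self.block_size])
        offset = address % self.block_size
        return self.lines[line][1][offset], hit

memory = list(range(64))                  # toy "main memory": cell i holds i
cache = DirectMappedCache(num_lines=4, block_size=8)
print(cache.read(memory, 10))             # miss: block copied -> (10, False)
print(cache.read(memory, 11))             # same block, now a hit -> (11, True)
```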

Computer Graphics

Raster scan displays

The beam intensity is turned on and off to create a pattern of illuminated spots that forms a picture. Each screen point is referred to as a pixel or pel (shortened forms of picture element).

A simple raster system may use 2 bits per pixel, but high-quality systems use as many as 24 bits per pixel.

Refreshing the picture takes place at a rate of 60 to 80 frames per second.
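The bits-per-pixel figure directly determines the frame-buffer memory required. A quick Python helper (the resolution used in the example is illustrative):

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Memory needed to store one full frame, in bytes."""
    return width * height * bits_per_pixel // 8

# Illustrative: a 1024x768 frame at 24 bits per pixel needs 2.25 MiB
print(framebuffer_bytes(1024, 768, 24) / 2**20)   # 2.25
```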

Graphics Functions

A general purpose graphics package provides users with a variety of functions for creating and manipulating pictures.

Output Primitives

Line Drawing Algorithms

DDA Algorithm

Bresenham's Line Algorithm

Parallel Line Algorithm

Circle Generating Algorithms

Midpoint Circle Algorithm

Ellipse Generating Algorithms

Midpoint Ellipse Algorithm
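A self-contained Python sketch of Bresenham's line algorithm, using only integer arithmetic; this all-octant error-term formulation is one common variant, not necessarily the exact form given in a particular textbook:

```python
def bresenham_line(x0, y0, x1, y1):
    """Rasterize a line segment into pixel coordinates (integer-only)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1            # step direction in x
    sy = 1 if y0 < y1 else -1            # step direction in y
    err = dx + dy                        # combined error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                     # error favors a step in x
            err += dy
            x0 += sx
        if e2 <= dx:                     # error favors a step in y
            err += dx
            y0 += sy
    return points

print(bresenham_line(0, 0, 3, 1))        # [(0, 0), (1, 0), (2, 1), (3, 1)]
```

Unlike the DDA algorithm, no floating-point arithmetic is needed, which is why Bresenham's method is favored in hardware and low-level rasterizers.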

## Software Engineering

Based on the book by Roger Pressman, Sixth Edition

Software Engineering Practice

George Polya outlined the essence of problem solving:

1. Understand the problem

2. Plan a solution

3. Carry out the plan

4. Examine the result for accuracy

Core Principles

The dictionary defines the word principle as "an important underlying law or assumption required in a system of thought."

David Hooker has proposed seven core principles that focus on the software engineering process as a whole.

1. The reason it all exists.

2. Keep it simple, stupid

3. Maintain the vision

4. What you produce, others will consume

5. Be open to the future

6. Plan ahead for reuse.

7. Think

Communication Principles that apply to customer communication

1. Listen

2. Prepare before you communicate

3. Someone should facilitate the activity

4. Face-to-face communication is best.

5. Take notes and document decisions

6. Strive for collaboration

7. Stay focused, modularize your discussion.

8. If something is unclear, draw a picture

9. Once you agree to something, move on. If you can't agree to something, move on. If a feature or function is unclear and cannot be clarified at the moment, move on.

10. Negotiation is not a contest or a game. It works best when both parties win.

Principles of Planning

1. Understand the scope of the project

2. Involve the customer in the planning activity

3. Recognize that planning is iterative

4. Estimate based on what you know.

5. Consider risk as you define the plan

6. Be realistic

7. Adjust granularity as you define the plan

8. Define how you intend to ensure quality.

9. Describe how you intend to accommodate change

10. Track the plan frequently and make adjustments as required.

Analysis Modeling Principles

1. The information domain of a problem must be represented and understood.

2. The functions that the software performs must be defined.

3. The behavior of the software (as a consequence of external events) must be represented.

4. The models that depict information, function, and behavior must be partitioned in a manner that uncovers detail in a layered (or hierarchical) fashion.

5. The analysis task should move from essential information toward implementation detail.

Software Design Modeling Principles

1. Design should be traceable to the analysis model.

2. Always consider the architecture of the system to be built.

3. Design of data is as important as design of processing function

4. Interfaces must be designed with care (both external and internal)

5. The user interface should be designed around the needs of the end user.

6. Component level design should be functionally independent.

7. Components should be loosely coupled to one another and to the external environment.

8. Design representations (models) should be easily understandable.

9. The design should be developed iteratively. With each iteration the designer should strive for greater simplicity.

Coding Principles and Concepts

Preparation Principles

1. Understand the problem you're trying to solve.

2. Understand the basic design principles and concepts.

3. Pick a programming language that meets the needs of the software to be built and the environment in which it will operate.

4. Select a programming environment that provides tools that will make your work easier.

5. Create a set of unit tests that will be applied once the component you code is completed.

Coding Principles

1. Constrain your algorithms by following structured programming (BOH00)

2. Select data structures that will meet the needs of the design.

3. Understand the software architecture and create interfaces that are consistent with it.

4. Keep conditional logic as simple as possible.

5. Create nested loops in a way that makes them easily testable.

6. Select meaningful variable names and follow other local coding standards

7. Write code that is self-documenting.

8. Create visual layout (e.g., indentation and blank lines) that aids understanding.

Validation Principles

1. Conduct a code walkthrough when appropriate.

2. Perform unit tests and correct the errors you've uncovered.

3. Refactor the code.

Software Testing Principles

Principles developed by Davis

1. All tests should be traceable to customer requirements.

2. Tests should be planned long before testing begins.

3. The Pareto principle applies to software testing.

4. Testing should begin "in the small" and progress toward testing "in the large."

5. Exhaustive testing is not possible

Deployment Principles

1. Customer expectations for the software must be managed.

2. A complete delivery package must be assembled and tested.

3. A support regime must be established before the software is delivered.

4. Appropriate instructional materials must be provided to end users.

5. Buggy software should be fixed first, delivered later.


## Sunday, March 15, 2015

### Information Technology Research and Development - Issues and Trends

Information Technology Research and Development: Critical Trends and Issues (Google eBook)

United States. Congress. Office of Technology Assessment, Unknown Author

Elsevier, Sep 24, 2013 - 360 pages

Information Technology Research and Development: Critical Trends and Issues is a report of the Office of Technology Assessment of the United States Government on the research and development in the area of information technology.

The report discusses information technology research and development - its goals, nature, issues, and strategies; environment and its changes; the roles of the participants; and the health of its field.

The book then goes on to four selected case studies in information technology: advanced computer architecture; fiber optic communications; software engineering; and artificial intelligence. The text also talks about the effects of divestiture and deregulation on research; education and human resources for research and development; foreign information technology research and development; and technology and industry.

The text is recommended for students and researchers of information technology who wish to know more about the state of research and development in this field and the applications of this research in different areas.

https://books.google.co.in/books?id=cbsgBQAAQBAJ

Goals for USA Federal R&D Policy for IT

Support National Defense

Provide for Social Needs

Promote Economic Growth

Advance Basic Understanding of the World

Enhance National Prestige

Support Civilian Agency Missions

Most areas of IT are still in the early stages as technologies. Further improvement depends upon more fundamental research and technical development.

Innovation is a process that includes research and development, manufacturing or production, and distribution.

Von Neumann architecture is serial architecture. Parallel processing architectures are a research area.

International Competitiveness in Electronics

https://books.google.co.in/books?id=PWHa1jDUSdQC

## Friday, March 6, 2015

### Big Data Application in Medical Practice

## Design and Development of a Medical Big Data Processing System Based on Hadoop

Qin Yao, Yu Tian, Peng-Fei Li, Li-Li Tian, Yang-Ming Qian, Jing-Song Li. Journal of Medical Systems (Impact Factor: 1.37).

**03/2015**; 39(3):220. DOI: 10.1007/s10916-015-0220-8

## Saturday, January 3, 2015

## Friday, January 2, 2015

### Internet of Things - System Components

The IoT System

The IoT concept generally refers to applications built on distributed, remote collection of environmental data, followed by limited local processing, with the result then made available - via some form of shared Internet access - to a larger processor for further processing and aggregation.

Applications in different domains include:

• Personal area: Wearable devices provide data for processing in the smartphone or other personal equipment.

• Wide area: Sensors are distributed citywide for applications such as taxi availability. The data collected is used centrally for citywide processing and analysis. Inventory and transport tracking are examples of wide-area IoT applications.

• Local area: Data is processed by a central home or office computer. In homes, the data comes from appliances, energy and lighting devices, and heating and cooling equipment. In factories and offices, various equipment and devices are connected to the central computer.

In all these applications, data is collected locally through various sensors. Some processing and data reduction are also performed locally, and the resulting information is then transmitted. Data from several sources can be aggregated and further processed in the central computer, and the results are provided to the user.
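The collect-reduce-transmit pattern described above can be sketched in Python; `read_sensor` and `send_to_cloud` are hypothetical placeholders standing in for real device and network APIs:

```python
# Minimal sketch of the edge-side IoT pattern: sample locally, reduce the
# data (here, a simple average), and ship only the summary upstream.
def collect_and_reduce(read_sensor, num_samples):
    """Local processing / data reduction before transmission."""
    samples = [read_sensor() for _ in range(num_samples)]
    return sum(samples) / len(samples)

def run_once(read_sensor, send_to_cloud, num_samples=16):
    summary = collect_and_reduce(read_sensor, num_samples)
    send_to_cloud(summary)               # aggregation happens centrally

readings = iter([20.0, 21.0, 22.0, 21.0])
print(collect_and_reduce(lambda: next(readings), 4))   # 21.0
```

Sending one averaged value instead of every raw sample is the "limited local processing" that keeps the radio duty cycle, and hence power consumption, low.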

The device that performs these functions (sensing, processing, and transmission) on the distributed side can be built around a system-on-chip (SoC) for high-end applications or around a general-purpose MCU IC for low-end applications. Such a device typically includes: battery/power management, embedded flash (e-flash) memory, user interfaces (I/F) and other I/O devices, a wireless or wired communications interface (RF transceiver, TRX), and the mission-critical sensor I/F.

http://electronicdesign.com/analog/define-analog-sensor-interfaces-iot-socs

## Texas Instruments IoT Products

http://www.ti.com/ww/en/internet_of_things/iot-products.html

### Microcontrollers

TI’s broad portfolio of microcontrollers allows our customers to innovate and create designs across a wide range of IoT applications, whether high-performance or low-power. TI’s Performance MCUs are designed for closed-loop control IoT applications requiring real-time performance, connectivity, and safety functionality. The Low-Power MCUs integrate a power-management system with interrupt handling and SRAM/FRAM for real-time data capture, making these devices extremely capable at ultra-low power levels to preserve battery life in IoT applications. TI Performance and Low-Power MCUs share a scalable platform to support consumer, industrial, and HealthTech IoT applications today.

### Processors

Sitara processors, with scalable processing abilities, rich 3D graphics, robust peripherals and high-level OS support, can connect to protocols like power line communication, ZigBee, Ethernet, Wi-Fi, Z-wave and Bluetooth Low Energy - ideal for smart appliance applications.

### Wireless connectivity

TI offers cloud-ready system solutions designed to access the IoT through the industry’s broadest portfolio of wireless connectivity technologies, including Wi-Fi®, Bluetooth® Smart, ZigBee®, 6LoWPAN, and Sub-1 GHz among others. Whatever your application, TI makes developing easier with the hardware, software, tools and support you need to connect to the IoT. The SimpleLink Wi-Fi CC3200 Internet-on-a-chip™ solution is a wireless MCU that integrates a high-performance ARM® Cortex®-M4 MCU with on-chip Wi-Fi, Internet and robust security protocols allowing customers to develop an entire application with a single IC.

## Sensing Products

http://www.ti.com/lsds/ti/analog/sensors/overview.page?DCMP=sensing-en&HQS=tlead-sensing-sva-sensing-vanity-lp-en

Capacitive sensing products

Capacitive sensing with grounded capacitors is a high-resolution, low-cost, contactless sensing technique that can be applied to a variety of applications. The sensor in a capacitive sensing system is any conductor, allowing for a low-cost and highly-flexible system design. The FDC1004 is a 4-channel capacitance-to-digital converter designed for capacitive sensing applications. It features more than 16-bit effective noise-free resolution and provides compensation of up to 100 pF offset capacitance to accommodate the use of remote sensors. The FDC1004 also includes two strong drivers for sensor shields to allow focusing of sensing direction and to reduce EMI interference.

Where this technology is used

The sensor in a capacitive sensing system is any metal or conductor, delivering a low-cost and highly-flexible system design. Capacitive sensing differs from capacitive touch in that it provides a higher resolution to allow for further sensing distance and higher-performance in sensing applications, including proximity, gesture, liquid level, and material properties.
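A rough Python illustration of the physics behind capacitive proximity sensing, using the parallel-plate approximation (the plate area and gap values below are arbitrary examples, not taken from any datasheet):

```python
# Parallel-plate approximation: C = eps_r * eps_0 * A / d, so capacitance
# grows as a conductive target gets closer -- the basis of proximity sensing.
EPSILON_0 = 8.854e-12  # F/m, permittivity of free space

def plate_capacitance(area_m2, distance_m, relative_permittivity=1.0):
    return relative_permittivity * EPSILON_0 * area_m2 / distance_m

# Illustrative: halving the gap doubles the capacitance
c_far = plate_capacitance(1e-4, 2e-3)
c_near = plate_capacitance(1e-4, 1e-3)
print(c_near / c_far)   # 2.0
```

A capacitance-to-digital converter such as the FDC1004 digitizes exactly this kind of change and leaves the interpretation (proximity, level, gesture) to the application.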

Current sensing products

Current shunt monitors, or current sense amplifiers, are designed to monitor the current flow in a load by measuring the voltage drop across a resistor. They offer a unique input stage topology that allows the common mode voltage to exceed the supply voltage. Integrated precision gain resistors enable very accurate measurements.

Where this technology is used

Current shunt amplifiers enable a lower cost method of current measurement than indirect methods of sensing. TI's broad portfolio of current sense amplifiers enable a wide range of applications including power supply monitoring, motor/valve control, and battery management. They are recommended for currents under 100A and voltages under 100V.
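The voltage-drop measurement described above reduces to Ohm's law. A small Python sketch; the shunt resistance, gain, and output voltage used here are illustrative values, not figures from any TI datasheet:

```python
def load_current(v_out, gain, r_shunt):
    """Recover load current from a current-sense amplifier's output.

    The amplifier produces v_out = gain * I * r_shunt,
    so the load current is I = v_out / (gain * r_shunt).
    """
    return v_out / (gain * r_shunt)

# Illustrative: a 2 mOhm shunt and a gain of 50; an amplifier output of
# 0.5 V then corresponds to 5 A of load current.
print(load_current(0.5, 50, 0.002))   # 5.0
```

The tiny shunt value is typical of the technique: it keeps the power dissipated in the sense resistor (I²R) negligible while the amplifier's gain restores a usable signal level.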

Gas and chemical sensing products

Two common technologies to detect gas are electrochemical cells and NDIR sensors.

Electrochemical sensors create a potential and measure the current across a cell that responds to a specific gas type.

NDIR (non-dispersive infrared) sensors use infrared light to determine the amount of a specific gas in a container.

pH sensing is used to monitor water quality by measuring the concentration of hydrogen ions in a solution.

Hall effect sensors

The Hall effect is a sensing technology that detects the presence and strength of a magnetic field. Hall effect sensors can measure the strength of the magnetic field as an indicator of distance or position without physical contact.

Where this technology is used

Hall effect sensors are commonly used to detect position, speed, or acceleration of an object by sensing the magnetic field generated by the object.

Humidity sensors

Humidity sensors determine the amount of water vapor / moisture in the air. Because relative humidity is a function of temperature, humidity sensors also usually include integrated temperature sensors.

Where this technology is used

This technology is used in many applications, including environmental monitoring in automobiles and buildings, HVAC, warranty monitoring, process control, fog/condensation sensing, and remote weather stations.

Inductive sensing products

Inductive sensing is a contactless sensing technology that can be used to measure the position, motion, or composition of a metal or conductive target as well as detect the compression, extension, or twist of a spring. Immunity to environmental interferers such as oil, water or dirt allows for sensing even in very harsh environments.

Where this technology is used

TI’s inductance-to-digital converters (LDCs) enable customers to use their own custom coils as sensors. The LDC can be used to detect changes in Rp (parallel resonance impedance) and L (inductance) of the sensor; the choice of which value is used would depend on the application and system requirements.

Optical sensing products

Optical sensing is the conversion of light rays into electronic signals. Often the intensity of light or changes between one or more light beams is being measured.

Where this technology is used

In its simplest form, sensing light intensity is used for lighting controls in everything from tablets/phones to building automation and street lighting. Optical sensing is used in a broad range of applications, and by monitoring additional characteristics (spectrum, phase, geometry, or timing), it enables advanced applications such as chemical analysis, 3D mapping, medical scanning, and pulse oximetry.

Pressure sensor signal conditioners

Pressure sensor signal conditioners deliver highly-precise and programmable solutions for accurately measuring pressure.

Where this technology is used

Measuring pressure precisely is critical in a number of industrial and commercial applications.

Temperature sensors

Temperature sensors leverage the highly-predictable and linear properties of a silicon PN junction to derive the temperature. Temperature sensors can guarantee high accuracy while requiring zero calibration in the end system. Temperature sensors offer a wide range of integration and multi-channel options to monitor external PN junctions such as diodes, transistors, processors, ASICs, and FPGAs.

Where this technology is used

Temperature sensors are often used as a replacement for thermistors for monitoring and protection, calibration, and control. Temperature sensors can provide greater linearity, lower power, guaranteed accuracy, high programmability, and built-in over-temperature detection and offer a wide range of analog and industry-standard interfaces.

Ultrasonic sensing products

Ultrasonic sensing is the measurement of the time between an ultrasonic signal being sent and received. The interval between the two signals is typically referred to as time of flight (ToF). The speed of an ultrasonic wave is sensitive to the transmission medium (flow speed, temperature & concentration / purity).

Where this technology is used

Distance to target either in gas or fluid

Level of fluid in a tank

Flow speed of a gas or liquid

Temperature and concentration of a liquid or a gas
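The time-of-flight relation above can be sketched in Python; the speed of sound used is the usual ~343 m/s figure for air at room temperature, which, as noted, varies with the transmission medium:

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s at ~20 degC; varies with temperature/medium

def distance_from_tof(tof_seconds, speed=SPEED_OF_SOUND_AIR):
    """Convert a round-trip time of flight to a one-way target distance."""
    return speed * tof_seconds / 2.0     # halve: the pulse travels out and back

# Illustrative: a 5.8 ms echo corresponds to roughly 1 m
print(round(distance_from_tof(0.0058), 3))
```

The same relation run in reverse - measuring ToF along and against a flow - is what ultrasonic flow meters use to determine flow speed.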

### Power management

TI provides the broadest portfolio of innovative power management integrated circuits and easy-to-use design resources that allow designers to quickly develop Internet-ready applications that connect to the cloud and connect with each other.

### Analog signal chain

In IoT, translating sensory inputs into information the system can act on requires precise, low-power, flexible analog signal processing. With TI’s comprehensive signal chain portfolio and integrated analog front ends, designers can optimize their systems for both power and performance.

### Internet of Things - Energy Management Systems Applications

2014

Powerman - L&T Energy Management System - LAN/WAN system

http://www.larsentoubro.com/lntcorporate/LnT_Offerings/Product_Template1.aspx?res=P_EBG_COFF_SBU_PROD&pid=2406&sbu=13

May 2014

Enmetric Systems and Their Internet of Things (IoT) Platform Chosen as Energy Management Solution for First Net-Zero Energy Office Building in San Francisco

May 2014, Belmont, CA

https://www.enmetric.com/news/enmetric-systems-and-their-internet-things-iot-platform-chosen-energy-management-solution-first

http://www.marketwatch.com/story/enmetric-systems-and-their-internet-of-things-iot-platform-chosen-as-energy-management-solution-for-first-net-zero-energy-office-building-in-san-francisco-2014-05-21
