Samsung To Announce The Samsung Galaxy S9 In February 2018



The well-known South Korean company Samsung is reportedly planning to announce its next flagship phone, the Samsung Galaxy S9, in February 2018, according to a report from Bloomberg, with the phones launching as early as March.



Well, if true, it would be one of the earliest announcements for a new model of Samsung’s main Galaxy line, which has typically been announced in March and released in April in past years. As was rumored earlier, the Samsung S9 and the accompanying Samsung S9 Plus will be more of an iterative update than a drastic redesign like the Samsung S8, with VentureBeat’s Evan Blass claiming that the S9 devices will get faster processors in the form of the recently announced Snapdragon 845, along with updated cameras.



The Samsung S9 was rumored to make an early appearance at CES in January 2018, but later reports cast doubt on that occurring. However, a full February reveal would split the difference between the early CES showing and the usual March unveiling. So let us keep our fingers crossed and see what happens in 2018.


California State Advises Its Citizens Against Keeping Phones In Their Pockets.


Presently, the jury is still out on whether or not cell phone radiation is bad for you, but California's Department of Public Health is not taking any chances. The agency just issued an advisory that suggests its residents should take steps to limit their exposure to cellphones. The notice recommends avoiding phone use when unnecessary, particularly when the cell signal is likely to kick into overdrive (such as when you are in a weak coverage area or streaming video). It also advises keeping your handset away from your body. California's Department of Public Health Director Dr. Karen Smith even suggests "not keeping your cellphone in your pocket."



Furthermore, the advisory follows the release of the California Department of Public Health's findings from 2009, which were prompted by a lawsuit from University of California, Berkeley professor Joel Moskowitz in his bid to explore possible links between cellphone use and increased risks of cancer. He believes that cellphone radiation poses a "major risk." Other agencies, such as Connecticut's own Department of Public Health, have put out similar recommendations.


Moreover, the CTIA wireless industry group, which has historically opposed attempts to raise public concerns over phone radiation, is not taking a definitive stance. In a statement, the CTIA said that health was "important" to its members and that people should "consult the experts."


It is a bold move, given that some of the companies that dominate the cellphone landscape are based in the state of California. The big question is whether or not the advisory will make a difference. Without a definitive link between phone use and health issues, the statement may not carry much weight. And let us face it: telling people to stop using smartphones as they normally do (especially in California) is like telling them to stop breathing. There would have to be a clear risk to make everyone give up devices that have quickly become staples of modern life.

Facebook Swiftly Fixed A Bug That Stopped You From Blocking Mark Zuckerberg.


Surprised? You can now block Mark Zuckerberg (CEO of Facebook) on Facebook. What a day. Up until September 2017, it was a little-known fact that anybody, including Mark Zuckerberg, was supposed to be blockable on the site. After the company acknowledged the issue, it emerged that high-traffic profiles like the one owned and operated by the creator of Facebook could not actually be blocked, due to an error that tripped up the feature whenever a page had been blocked by a large number of users in a short period of time.



You would receive an error message on the screen that said something like, “Block Error. Sorry, there was a problem blocking Mark Zuckerberg. Please try again." This was not intentional, Facebook told BuzzFeed News when the publication first noticed the bug that prevented users from exercising their blocking powers against the site’s CEO. Facebook said the fix was a technical challenge that might take some time, and the company also confirmed to The Verge today that it quietly and swiftly rolled out the fix a few weeks later. It went largely unnoticed, which was smart of them anyway.


Matt Navarra, the social media director at The Next Web, tweeted today about his discovery that Facebook’s blocking feature could be used against Mark Zuckerberg. The sheer thrill of removing Mark Zuckerberg's ability to dip into your News Feed is enough to put anyone on a power trip:

Matt Navarra's tweet

Matt Navarra said, "I just blocked Zuckerberg too, and it feels okay." I would not say I am drunk with blocking powers, as I unblocked him immediately after pressing the block button. But yeah, it is nice to have this ability once more. I appreciate it, and the recognition from one of the world’s most powerful and influence-wielding corporations that some users may not want to get PR-scrubbed updates from its leader. Thanks, Facebook.


Correction: A previous version of this article misstated when the ability to block Mark Zuckerberg (CEO of Facebook) became available. Facebook fixed the bug back in September 2017, not today or this week.

Voyager 1: NASA's 40-Year-Old Spacecraft Responded To A Signal Sent 13 Billion Miles Into Interstellar Space.

Voyager 1, the only spacecraft to have entered interstellar space, was launched on September 5, 1977, and is now the farthest man-made object from Earth. However, just because Voyager 1 is 40 years old and over 13 billion miles away does not mean it cannot be contacted. That is right: NASA was able to send a signal that far to reach Voyager 1, and it responded.



As we all know, a spacecraft of that age is going to develop some technical issues, and Voyager 1 is no exception. Its attitude control thrusters, which the vessel uses to point its antenna toward Earth, had been wearing down gradually. This is a bit of a problem, because if Voyager 1's antenna is not facing us here on Earth, then there is no way of contacting it at all.


Voyager 1: NASA's 40-year-old spacecraft in space



NASA is becoming exceptionally good at squeezing every last bit of life out of Voyager 1's hardware, and in an effort to prolong its usefulness, scientists at NASA's Jet Propulsion Laboratory (JPL) discovered they could use another set of thrusters on the vessel to do the job. If successful, Voyager 1 could still be useful beyond the year 2020.


Voyager 1: NASA's 40-year-old spacecraft close to Saturn



Furthermore, on Tuesday, November 28, NASA fired up the thrusters again. The team then had to wait about 19 hours and 35 minutes for confirmation that the test was a success, as that is how long it took Voyager 1 to send the results a staggering 13 billion miles back to Earth. The confirmation arrived the following day, to great jubilation from everyone involved in the operation.


Voyager 1: some mechanical parts of NASA's 40-year-old spacecraft



"The Voyager 1 team got more excited each time with each milestone in the thruster test," Todd Barber from the Jet Propulsion Laboratory (JPL) said in a statement. "The mood was one of relief, joy, and incredulity after witnessing these well-rested thrusters pick up the baton as if no time had passed at all."


Voyager 1: infographic of Voyager 1



How To Connect The DStv Zappa Decoders For Extra View

As we all know, one of the apparent downsides of the new DStv HD decoder is the removal of the Radio Frequency (RF) input used for Extra View. I am glad to inform you that there is now a workaround for this (bear in mind that the initial investment is substantial).



Luckily, this tweak is permanent. So without wasting much time: the new DStv decoder Extra View configuration is easy to do, trust me. Just follow the instructions provided below. Before that, let me quickly recap what DStv Extra View means. DStv Extra View is a service designed to let you use one subscription on at least two DStv decoders. For example, if you have a DStv Premium subscription and you want to enjoy it both in your bedroom and in the living room, Extra View is the easiest way to go.

How To Connect The DStv Zappa Decoders For Extra View (DStv Smart LNB Connection)

As you can see from the picture above, it was as simple as ABC with the previous HD decoder and on the Explora. But the new HD decoder makes it tricky; in fact, you cannot do it out of the box. That is why you will need this tutorial if you do not already know how to go about it. In addition, you will pay a service charge of N2,600 for Extra View. In other words, DStv Extra View is not free of charge.


New DStv Decoder Extra View Configuration
(The Requirements)

  • You will need two DStv Zappa decoders. You will use one as the primary and the other as the secondary.
  • You will also need a DStv Smart LNB (bought separately from a DStv office).
  • You will also need a dish, a regular LNB, and other dish components such as coaxial cable.
  • You will also need a professional satellite installer for this. However, you can give it a shot yourself if you know how to handle the installation.
  • Finally, you will need to pay for the DStv Extra View subscription. This goes along with your chosen subscription.

The DStv Decoder Extra View Configuration.

  • First, track DStv on 36°E and get a very high signal strength.
  • Next, replace the regular LNB with the Smart LNB.
  • Now, connect one cable to Unicable port A and another to Unicable port C, as shown in the diagram above.
Note: Connect port A to the primary decoder and port C to the secondary decoder.

Now ensure you enter the following settings on both decoders:

For the Primary Decoder Settings.

User band 1 frequency: 1210
User band 2 frequency: 1420
User band 1 index: 0
User band 2 index: 1

For the Secondary Decoder Settings.

User band 1 frequency: 1680
User band 2 frequency: 2040
User band 1 index: 2
User band 2 index: 3

Once you have done this, you can start enjoying your Extra View.

Uterus Transplant Recipient In The US Gives Birth For The First Time.

The baby

A woman who received a uterus transplant has given birth to a baby, a first in the US. The woman is part of an ongoing uterine transplant clinical trial taking place at Baylor University Medical Center at Dallas and, like the other women in the trial, had a nonfunctioning or nonexistent uterus. Her uterus was donated by another woman, Taylor Siler, who wanted to give someone else the opportunity to have a child. The trial, which accepts both living donations like Siler's and donations from deceased individuals, will complete 10 transplants in total. So far, eight transplants have been completed; at least three of them have failed, but a second trial participant is now pregnant following a successful transplant, and everyone is hoping for another successful outcome.



While this is a first for the US, it is not the first ever recorded. A group of medical experts in Sweden achieved the very first post-transplant births, a total of eight, and the birth that just took place at Baylor University Medical Center at Dallas is the first to replicate the Swedish team's success.


The childbirth at Baylor University Medical Center at Dallas was indeed a great moment for everyone involved in the trial. "We do transplants all day long," Giuliano Testa, head of the clinical trial, said. "This is not the same thing. I totally underestimated what this type of transplant does for these women. What I have learned emotionally, I do not have the words to describe." Gregory McKenna, who is also a transplant surgeon at Baylor University Medical Center at Dallas, said, "Outside my own children, this is the most excited I have ever been about any baby being born. I just started to cry."



Nevertheless, once a uterus is transplanted, the recipient must wait for menstruation, which, if the transplant is successful, usually occurs around four weeks later. Then, to get pregnant, they must go through in-vitro fertilization, since the transplanted uterus is not attached to their ovaries.


The Baylor University Medical Center team says that many more uterine transplants will need to be done before this can become an approved treatment, but these initial successes are promising. "For the girl who is getting the infertility diagnosis now, it is not hopeless," said Kristin Wallis, a uterine transplant nurse at Baylor University Medical Center at Dallas. "There is hope."

Researchers Invented Ultrasound Needle For Internal Surgical Images.

Minimally invasive surgeries are generally preferred because they typically mean less scar tissue, shorter recovery time, and a lower risk of contracting an infection as a result of the surgery. Nevertheless, minimally invasive surgeries have their downsides as well. Getting a good look at the tissue being targeted during a minimally invasive procedure can be quite a herculean task, and surgeons are often limited to using external ultrasound probes and imaging scans taken prior to surgery. But new research has just been published that presents a potential new option, one of its kind: an optical ultrasound needle.



Furthermore, within the optical ultrasound needle are two optical fibers. One of the fibers generates ultrasonic pulses by delivering brief flashes of light, and the other detects the light that is reflected back by the tissues in the body. "The whole process happens extremely quickly, giving an unprecedented real-time view of soft tissue," Richard Colchester, an author of the study, said in a short statement. "Using inexpensive optical fibres, we have been able to achieve high-resolution imaging using needle tips under 1 mm in size," said co-author Adrien Desjardins.


So far, the researchers have tested the ultrasound needle during heart surgery in pigs, and they hope to test it soon in other clinical applications that use minimally invasive techniques. They are also working earnestly toward using the technology in humans.

Medical doctors using the optical ultrasound needle during surgery

 

Should I Leave My Cell Phone Battery Plugged In Overnight | My Phone Not Charging

Is leaving a smartphone plugged in overnight to charge a bad idea? It is a question many people ask, hoping for a definitive answer.
The truth is that most of us have done it: just before bedtime, you plug your smartphone into its charger so that it can reach a 100% charge while you sleep through the night. The idea is simply to wake up the next day with a fully charged battery. Nevertheless, you might have heard that charging your smartphone overnight damages the battery and eats away at its charge capacity over time, so you turn to Google for quick answers.




So before I get into the nitty-gritty details of "Should I Leave My Cell Phone Battery Plugged In Overnight | My Phone Not Charging", I will give you the shortest possible answer first: yes, you can leave your smartphone plugged in overnight. That "yes" does not mean we should abandon good charging practices, though. Below are the things you need to know to make your smartphone battery last longer.




Lithium Battery Vs Nickel Battery
We are aware that the majority of today’s technology runs on lithium-ion batteries. Before now, batteries were mainly nickel-based, like the Duracell and Energizer batteries you normally buy in stores. Nickel-based batteries exhibited a tendency to have a cyclic memory: if they were not given full charges in between cycles, they might “forget” their full capacity and remember the point to which they were last charged as the maximum capacity. Many of us have never used nickel-based batteries in our mobile devices, since the transition to lithium-ion had occurred by the early 2000s.



Lithium-based batteries do not suffer from the memory phenomenon exhibited by nickel batteries, and they have played a major part in the mobile phone revolution. For one thing, they are able to hold a lot of power while remaining fairly compact in size, which allowed mobile phones to become increasingly smaller and thinner. Also, lithium batteries have a much better lifespan and recharge fairly quickly. The one setback of the lithium battery is that it is temperature sensitive.



Heat: The Silent Battery Murderer
Now we get to the most significant threat to any lithium-ion or lithium-polymer battery: heat. The funny thing is that batteries dislike cold just about as much as they dislike heat, but the latter is more relevant when it comes to leaving your device plugged into its charger overnight. The charging temperature range for lithium-based batteries, i.e. the temperature at which the battery is capable of receiving a charge, is 32°F to 113°F, while lithium-based batteries can discharge at temperatures as low as -4°F. Fast-charging technologies work best at warmer temperatures between 41°F and 113°F, with no charging possible when the temperature is lower than 32°F.


However, there are a couple of important things these temperature figures tell us. First, a lithium-based battery can discharge at temperatures far below freezing, so keeping lithium batteries in your kitchen freezer will not prevent them from self-discharging. Second, a lithium-ion battery warms up as it charges, and as the battery gets warmer, it charges faster. But since a battery cannot hold more than its capacity, after reaching a full charge the battery expends the excess power by giving it off as heat. So overnight charging becomes a problem when a battery has no way to reroute the incoming current after reaching its full capacity.
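To make those temperature windows concrete, here is a minimal sketch in Python (illustrative only; the function name and thresholds simply restate the figures quoted above and are not taken from any phone vendor's API):

```python
# Temperature windows quoted above, in degrees Fahrenheit.
CHARGE_MIN_F = 32.0        # below this, no charging can occur
CHARGE_MAX_F = 113.0       # upper end of the charging window
FAST_CHARGE_MIN_F = 41.0   # fast charging works best from here up
DISCHARGE_MIN_F = -4.0     # the battery can still discharge down to this

def charging_mode(temp_f: float) -> str:
    """Classify what a lithium battery can do at the given temperature."""
    if FAST_CHARGE_MIN_F <= temp_f <= CHARGE_MAX_F:
        return "fast or normal charging"
    if CHARGE_MIN_F <= temp_f < FAST_CHARGE_MIN_F:
        return "normal charging only (too cool for fast charging)"
    if DISCHARGE_MIN_F <= temp_f < CHARGE_MIN_F:
        return "discharge only (too cold to charge)"
    if temp_f > CHARGE_MAX_F:
        return "too hot to charge"
    return "below the quoted operating range"

if __name__ == "__main__":
    for t in (-10, 0, 35, 70, 120):
        print(f"{t:>4} F -> {charging_mode(t)}")
```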

Smartphones Use Battery Power Smartly
Batteries used in modern-day mobile devices are still mostly the same as they have been for almost two decades now, but the devices these batteries power have become much, much smarter. Nowadays, we have less to worry about when it comes to battery health, because power optimization has been put on the shoulders of the software running these devices.
Thus, we get to the answer to our main question: should we leave our smartphone plugged in overnight? The answer is a resounding "sure, why not?"



As we discussed above, the main danger in leaving a smartphone plugged in overnight was allowing the battery of your device to get hot and remain hot through the rest of the night. Modern phones stop charging when the battery has reached its maximum capacity and begin to use the connected charger as their primary power source, allowing you to wake up to a fully charged battery while your phone remains powered on through the night. It’s a pretty sweet deal.

However, that is not to say that your charging habits cannot affect the health and longevity of your phone's battery. While you are not at risk of overheating your battery by leaving your phone plugged in overnight, I will still walk you through a number of tips that you can incorporate into your charging habits to keep your device’s battery in good shape and condition.

Smartphone Battery Charging Best Practices
As we have it, a lithium-based battery is capable of a finite number of charge-and-discharge cycles. With each cycle, the capacity of the battery is very slightly reduced, so we want to avoid as many complete cycles as we can.

Try to keep your battery’s charge level between 40% and 80% power. Of course, this won’t always be possible, but try not to let your phone’s battery level get below 40% too often and keep the number of complete top-offs to a minimum.

Also, try not to use fast charging every single time you charge your phone. Most rapid-charge systems cause the battery to become very hot, which we now know is bad for it. If you use the fast-charge option every single time, the battery is exposed to excess heat more often than it should be, resulting in a shorter lifespan.



Earlier in this discussion, we mentioned that lithium-ion batteries do not suffer from the cyclic memory of nickel-based batteries. While that is true, the internal power meter in your smartphone, i.e. the part that determines the battery’s current power level, can sometimes get thrown off. You can recalibrate it by doing a full discharge-and-charge cycle: use your phone until it dies; once it is dead, charge it to full capacity while leaving its power off; finally, power your phone back on and make sure it reads as fully charged (100%). If it does not, power it off and continue charging. Repeat this process once a month or so to make sure your battery is functioning optimally.


Conclusion
Finally, the battery is one of a smartphone’s most important components; after all, a smartphone with a dead battery is little more than a paperweight. So it goes without saying that we don’t want to do anything that would damage our batteries and make them less efficient. Although some still believe it’s a bad idea to leave your phone plugged in overnight, all signs point to overnight charging being a completely valid way to make sure you start your day with a full charge on your smartphone.



So now, what do you think about overnight charging? Have you ever noticed a difference in the capacity of your device’s battery after charging overnight? Do you agree or disagree with our findings? Sound off in the comments below with your thoughts; I will be happy to read them. Have a nice time with your smartphone.

Confirmed! Kenya To Issue African Travelers Visas On Arrival


It has been confirmed that Africans visiting Kenya will no longer need to get a visa before traveling to the East African nation, which is the latest country to join a continent-wide push to boost integration and free movement.



Moreover, during his inauguration for a second term in office, President Uhuru Kenyatta announced that Africans wishing to visit the East African nation of Kenya will be eligible to receive a visa on arrival. The directive, he said, was meant to enhance trade, security, and intercontinental travel. Kenya already had one of the more straightforward online processes for short-term visas, as we all know.

“The free movement of people on our continent has always been a cornerstone of Pan-African brotherhood and fraternity,” Kenyatta said, and we can all agree with that. “The freer we are to travel and live with one another, the more integrated and appreciative of our diversity we will become.”


President Uhuru Kenyatta

President Uhuru Kenyatta made his announcement in front of African leaders from the following countries: Djibouti, Ethiopia, Gabon, Nigeria, Rwanda, Somalia, Uganda, and Zambia, among others.



However, moving across Africa, or anywhere else, is tough for Africans themselves, who hold some of the least powerful passports in the world; it is not something to shy away from, and we all know it is true. Even North Americans and Europeans travel within the continent faster and with less restrictive visa processes than most Africans. But over the last couple of years, there has been a drive by the African Union and others to make movement across the continent easier for Africans themselves.

Furthermore, the African Development Bank, which monitors the visa openness of countries, says at least 21 countries have loosened or scrapped their visa rules altogether. Along with Seychelles, which has had no visa requirements for a long time now, Ghana, Rwanda, Mauritius, Nigeria, and Benin have all adopted similar policies over the last two years. The African Union (AU) also launched a continental passport last year as part of its move to encourage open borders.



Nevertheless, in his speech, President Uhuru Kenyatta went on to say that members of the East African Community (EAC) will be treated as Kenyans when they visit the country. They will be able to use their identity cards, rather than passports, to do business and own property in Kenya. The EAC consists of six countries: Burundi, Kenya, Rwanda, South Sudan, Tanzania, and Uganda. President Kenyatta said he did not expect any reciprocity from the other member states.

A SEMINAR REPORT ON UNICODE STANDARD

UNICODE STANDARD

A SEMINAR REPORT

PRESENTED BY


######## ####### #####

CS/##/###

SUBMITTED TO THE
DEPARTMENT OF COMPUTER SCIENCE, FACULTY OF SCIENCE
MADONNA UNIVERSITY ELELE CAMPUS, RIVERS STATE

IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR THE AWARD OF BACHELOR OF SCIENCE (B.Sc.) DEGREE IN COMPUTER SCIENCE

         SUPERVISED BY
    ######## ####### #####

    
                                                                                                     

DECLARATION


This is to certify that this seminar work, titled "Unicode Standard", was carried out by the student with registration number CS/##/### in partial fulfillment of the requirements for the award of a Bachelor of Science (B.Sc.) degree in Computer Science. This seminar research was done by the student named above and has not been submitted elsewhere for the award of a certificate, diploma, or degree.
                                                                    
………………………………                    ………………………………
######## ####### ##### (CS/##/###)          DATE
(Student’s name)

………………………………                    ………………………………
######## ####### #####                      DATE
(Supervisor)

………………………………                    ………………………………
######## ####### #####                      DATE
(H.O.D)

ACKNOWLEDGEMENT


I would like to thank Almighty God for giving me the courage to take on this topic, and my supervisor ######## ####### ##### for his support and guidance throughout this research. I would also like to extend my heartfelt thanks to my parents ######## ####### #####, to ######## ####### #####, to the HOD of the Computer Science department, MRS ######## ####### #####, and to all the lecturers who in one way or another helped me in the course of this research, as well as to my fellow colleagues and all those who supported and encouraged me.

TABLE OF CONTENTS
DECLARATION
ACKNOWLEDGEMENT
ABSTRACT
CHAPTER 1
INTRODUCTION
1.1 BACKGROUND OF THE STUDY
1.2 PROBLEM STATEMENT
1.3 OBJECTIVE OF THE STUDY
1.4 SCOPE OF THE REPORT
1.5 SIGNIFICANCE OF THE STUDY
1.6 LIMITATIONS
1.7 GLOSSARY
1.8 ORGANIZATION OF THE CHAPTER
1.8.1 Chapter 2 Literature Review
1.8.2 Chapter 3 Finding/Case Study
1.8.3 Chapter 4 Conclusion
CHAPTER 2
LITERATURE REVIEW
HISTORY OF UNICODE STANDARD
HISTORICAL BACKGROUND
ORIGIN AND DEVELOPMENT
CHAPTER 3
FINDINGS
3.1 COVERAGE OF UNICODE STANDARD
3.1.1 Languages covered by Unicode standard
3.1.2 Design Basis
3.1.3 Text Handling
3.1.4 The Unicode Standard and ISO/IEC 10646
3.1.5 The Unicode Consortium
3.1.6 ASCII TEXT AND UNICODE TEXT
3.2 CHARACTER SEMANTICS
3.3 WHY UNICODE STANDARD WAS DESIGNED
3.4 WHEN TO USE UNICODE-MODE APPLICABLE
3.5 ADVANTAGES OF UNICODE STANDARD
3.6 DISADVANTAGES OF UNICODE STANDARD
3.7 BENEFITS OF UNICODE STANDARD
3.8 THE IMPORTANCE OF UNICODE STANDARD
3.9 PROBLEM OF UNICODE STANDARD
3.10 AVAILABILITY OF UNICODE STANDARD
3.11 DIFFERENCE BETWEEN ASCII AND UNICODE STANDARD KEYBOARD
CHAPTER 4
CONCLUSION
4.1 SUMMARY
4.2 RECOMMENDATION
REFERENCE

ABSTRACT

The Unicode Standard is the universal character encoding scheme for writing characters and text. It defines a consistent way of encoding multilingual text that enables the exchange of text data internationally and creates the foundation for global software. The main focus of this research is Unicode as a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems, including the classical forms of many languages; its unified Han subset alone contains 27,484 ideographic characters defined by the national and industry standards of China, Japan, Korea, Taiwan, Vietnam, and Singapore. The Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z: it has the capacity to encode all characters used for the written languages of the world, with more than 1 million characters encodable, allowing the countries listed above to use their own letters and symbols to communicate with one another. The Unicode Standard provides the basis of software that must function all around the world.

CHAPTER 1


INTRODUCTION


1.1 BACKGROUND OF THE STUDY

               The Unicode Standard is the universal character encoding scheme for written characters and text. It defines a consistent way of encoding multilingual text that enables the exchange of text data internationally and creates the foundation for global software. As the default encoding of HTML and XML, the Unicode Standard provides a sound underpinning for the World Wide Web and new methods of business in a networked world. Required in new Internet protocols and implemented in all modern operating systems and computer languages such as Java, Unicode is the basis of software that must function all around the world.
With Unicode, the information technology industry gains data stability instead of proliferating character sets; greater global interoperability and data interchange; and simplified software and reduced development costs.
While modeled on the ASCII character set, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides the capacity to encode all characters used for the written languages of the world--more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility.
The Unicode Standard specifies a numeric value and a name for each of its characters. In this respect, it is similar to other character encoding standards from ASCII onward. In addition to character codes and names, other information is crucial to ensure legible text: a character's case, directionality, and alphabetic properties must be well defined. The Unicode Standard defines this and other semantic information, and includes application data such as case mapping tables and mappings to the repertoires of international, national, and industry character sets. The Unicode Consortium provides this additional information to ensure consistency in the implementation and interchange of Unicode data.
Unicode provides for two encoding forms: a default 16-bit form and a byte-oriented form called UTF-8 that has been designed for ease of use with existing ASCII-based systems. The Unicode Standard, Version 3.0, is code-for-code identical with International Standard ISO/IEC 10646. Any implementation that is conformant to Unicode is therefore conformant to ISO/IEC 10646.
Using a 16-bit encoding means that code values are available for more than 65,000 characters. While this number is sufficient for coding the characters used in the major languages of the world, the Unicode Standard and ISO/IEC 10646 provide the UTF-16 extension mechanism (called surrogates in the Unicode Standard), which allows for the encoding of as many as 1 million additional characters without any use of escape codes. This capacity is sufficient for all known character encoding requirements. Unicode covers all the characters for all the writing systems of the world, modern and ancient. It also includes technical symbols, punctuation, and many other characters used in writing text.
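As an illustration of the two encoding forms and the surrogate mechanism described above, the short Python sketch below (Python is used here purely for demonstration; it is not part of the standard) prints the UTF-8 and UTF-16 byte sequences for characters inside and beyond the Basic Multilingual Plane:

```python
# Compare UTF-8 and UTF-16 encodings of characters inside and beyond the BMP.
def show(ch: str) -> None:
    cp = ord(ch)
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")  # big-endian, without a byte-order mark
    print(f"U+{cp:04X}  UTF-8: {utf8.hex(' ')}  UTF-16: {utf16.hex(' ')}")

show("T")    # U+0054: 1 byte in UTF-8, 2 bytes in UTF-16
show("€")    # U+20AC: 3 bytes in UTF-8, still 2 bytes in UTF-16
show("𐍈")   # U+10348, beyond the 16-bit range: 4 bytes in UTF-8 and
             # a surrogate pair (d8 00 df 48) in UTF-16
```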

1.2 PROBLEM STATEMENT

Unicode is a computing industry standard allowing computers to consistently represent and manipulate text expressed in most of the world's writing systems. Older fonts were designed before Unicode and placed special characters in non-standard "slots" in the encoding. You can tell this in InDesign's Glyphs panel: when you pause over a special character, the tooltip description (based on Unicode) won't match what you're seeing.

1.3 OBJECTIVE OF THE STUDY

The main objective of this research is to examine Unicode as the computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems.

1.4 SCOPE OF THE REPORT

This work will cover the definition of the Unicode standard, what it is used for, and the advantages and disadvantages of using the standard. It will also list some languages covered by the Unicode standard. How individual languages assign or arrive at their different coding systems will not be part of this work.

1.5 SIGNIFICANCE OF THE STUDY

The essence of this topic is to help the audience understand that the Unicode standard is what allows different languages and symbols to be entered from the keyboard of a computer system; just as the keyboard carries the English letters A-Z, each country has its own letters and symbols that can be represented.

1.6 LIMITATIONS

The main problem this topic addresses is that countries whose main language is not English had difficulty using the keyboard for typing and sending messages, since their own letters were not supported, for example China, Cameroon, Russia, etc.

 1.7 GLOSSARY

·         UTF-Unicode Transformation Format
·         UTC-Unicode Technical Committee
·         IETF- Internet Engineering Task Force
·         UCS-Universal Character Set

1.8 ORGANIZATION OF THE CHAPTER

1.8.1 Chapter 2 Literature Review

This chapter discusses some of the related literature and work done by other researchers pertaining to this topic, and how that literature relates to my report. The study of the Unicode standard has proven to be important to the research community.

1.8.2 Chapter 3 Finding/Case Study

Chapter three presents my research findings, where I discuss the results I discovered in relation to the Unicode standard. I discuss the various parts of the Unicode standard and how they affect and help society.

1.8.3 Chapter 4 Conclusion

This section contains the conclusion, summary, and recommendations of the technical report on the Unicode standard.


CHAPTER 2

LITERATURE REVIEW

HISTORY OF UNICODE STANDARD

       The Unicode standard document is the original Unicode manifesto, authored by Joe Becker. It developed the basic principles for the Unicode design and briefly outlined the history and status of the "Unicode Proposal" as of 1988. Some of its content was reworked for inclusion in the early Unicode pre-publication drafts.
The August 1990 document was a very rough working draft of the standard. The editors and the Unicode Working Group (UWG), the predecessor of the UTC, used it to help lay out the anticipated content and structure of the standard. It contained no code charts or block descriptions. This draft preceded the final decision about the eventual name of the Unicode Consortium, hence the odd name of the consortium on the title page. The Unicode Consortium was not officially incorporated until January 3, 1991.
The October, 1990 draft was fairly widely distributed. It contained very abbreviated introductory text, initial drafts of the code charts, the first draft of the character names index, the first draft of multi-column Unihan charts, and various mapping tables. It was referred to at the time as "Unicode 0.9".
The December, 1990 drafts were even more widely distributed, including to many international reviewers. They were used to elicit public feedback on the draft standard. They put the participating SC2 national bodies on notice that the Unicode Consortium was serious about developing and publishing its standard. The first volume contained a more extended introduction, a full draft of the non-Han code charts, including the first version of the character names list. It also contained the first drafts of the block descriptions. The second volume contained the multi-column Unihan charts and the draft of the Han block introduction. At the time these draft volumes were referred to collectively as the "Final Review Draft".
The May 7, 1991 draft contained the complete content of the non-Han part of the draft standard, intended for final Unicode Consortium membership review and approval prior to copy editing with the eventual publisher, Addison-Wesley.

HISTORICAL BACKGROUND

The origins of Unicode date to 1987, when Joe Becker from Xerox and Lee Collins and Mark Davis from Apple started investigating the practicalities of creating a universal character set. In August 1988, Joe Becker published a draft proposal for an "international/multilingual text character encoding system, tentatively called Unicode". He explained that "[t]he name 'Unicode' is intended to suggest a unique, unified, universal encoding".
Unicode is intended to address the need for a workable, reliable world text encoding. Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In a properly engineered design, 16 bits per character are more than sufficient for this purpose.
His original 16-bit design was based on the assumption that only those scripts and characters in modern use would need to be encoded:
Unicode gives higher priority to ensuring utility for the future than to preserving past antiquities. Unicode aims in the first instance at the characters published in modern text (e.g. in the union of all newspapers and magazines printed in the world in 1988), whose number is undoubtedly far below 2^14 = 16,384. Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicode.
In early 1989, the Unicode working group expanded to include Ken Whistler and Mike Kernaghan of Metaphor, Karen Smith-Yoshimura and Joan Aliprand of RLG, and Glenn Wright of Sun Microsystems, and in 1990 Michel Suignard and Asmus Freytag from Microsoft and Rick McGowan of NeXT joined the group. By the end of 1990, most of the work on mapping existing character encoding standards had been completed, and a final review draft of Unicode was ready.
The Unicode Consortium was incorporated on January 3, 1991, in California, and in October 1991, the first volume of the Unicode standard was published. The second volume, covering Han ideographs, was published in June 1992.
In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode code space to over a million code points, which allowed for the encoding of many historic scripts (e.g., Egyptian) and thousands of rarely used or obsolete characters that had not been anticipated as needing encoding. Among the characters not originally intended for Unicode are rarely used Kanji or Chinese characters, many of which are part of personal and place names, making them rarely used, but much more essential than envisioned in the original architecture of Unicode


ORIGIN AND DEVELOPMENT

Unicode has the explicit aim of transcending the limitations of traditional character encodings, such as those defined by the ISO 8859 standard, which find wide usage in various countries of the world but remain largely incompatible with each other. Many traditional character encodings share a common problem in that they allow bilingual computer processing (usually using Latin characters and the local script), but not multilingual computer processing (computer processing of arbitrary scripts mixed with each other).
Unicode, in intent, encodes the underlying characters—graphemes and grapheme-like units—rather than the variant glyphs (renderings) for such characters. In the case of Chinese characters, this sometimes leads to controversies over distinguishing the underlying character from its variant glyphs.
In text processing, Unicode takes the role of providing a unique code point—a number, not a glyph—for each character. In other words, Unicode represents a character in an abstract way and leaves the visual rendering (size, shape, font, or style) to other software, such as a web browser or word processor. This simple aim becomes complicated, however, because of concessions made by Unicode's designers in the hope of encouraging a more rapid adoption of Unicode.
The first 256 code points were made identical to the content of ISO-8859-1 so as to make it trivial to convert existing western text. Many essentially identical characters were encoded multiple times at different code points to preserve distinctions used by legacy encodings and therefore, allow conversion from those encodings to Unicode (and back) without losing any information. For example, the "full width forms" section of code points encompasses a full Latin alphabet that is separate from the main Latin alphabet section. In Chinese, Japanese, and Korean (CJK) fonts, these characters are rendered at the same width as CJK ideographs, rather than at half the width.
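Both points can be checked in a couple of lines of Python (shown purely for illustration): Latin-1 bytes map one-to-one onto the first 256 code points, and the full-width Latin letters occupy code points separate from the ordinary ASCII letters.

```python
# The first 256 Unicode code points match ISO-8859-1 exactly.
assert bytes([0xE9]).decode("latin-1") == "\u00e9"   # byte 0xE9 -> U+00E9 (é)

# Full-width forms are distinct code points from the basic Latin letters.
print(ord("A"), hex(ord("A")))    # 65, 0x41  (LATIN CAPITAL LETTER A)
print(ord("Ａ"), hex(ord("Ａ")))  # 65313, 0xff21  (FULLWIDTH LATIN CAPITAL LETTER A)
```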


                        


CHAPTER 3

FINDINGS

3.1 COVERAGE OF UNICODE STANDARD

3.1.1 Languages covered by Unicode standard

The Unicode Standard, Version 3.0, contains 49,194 characters from the world's scripts. These characters are more than sufficient not only for modern communication, but also for the classical forms of many languages. Scripts include the European alphabetic scripts, Middle Eastern right-to-left scripts, and scripts of Asia. The unified Han subset contains 27,484 ideographic characters defined by national and industry standards of China, Japan, Korea, Taiwan, Vietnam, and Singapore. In addition, the Unicode Standard includes punctuation marks, mathematical symbols, technical symbols, geometric shapes, and dingbats.

3.1.2 Design Basis

The primary goal of the development effort for the Unicode Standard was to remedy two serious problems common to most multilingual computer programs. The first problem was the overloading of the font mechanism when encoding characters while the second major problem was the use of multiple, inconsistent character codes because of conflicting national and industry character standards. In Western European software environments, for example, one often finds confusion between the Windows Latin 1 code and ISO/IEC 8859-1. In software for East Asian ideographs, the same set of bytes used for ASCII may also be used as the second byte of a double-byte character. In these situations, software must be able to distinguish between ASCII and double-byte characters.
Figure 1-2. Universal, Efficient, and Unambiguous

3.1.3 Text Handling

Computer text handling involves processing and encoding. When a word processor user types in text via a keyboard, the computer's system software receives a message that the user pressed a key combination for "T", which it encodes as U+0054. The word processor stores the number in memory and also passes it on to the display software responsible for putting the character on the screen. This display software, which may be a windows manager or part of the word processor itself, then uses the number as an index to find an image of a "T", which it draws on the monitor screen. The process continues as the user types in more characters.
 The Unicode Standard directly addresses only the encoding and semantics of text and not any other actions performed on the text. In the preceding scenario, the word processor might check the typist's input after it has been encoded to look for misspelled words, and then highlight any errors it finds. Alternatively, the word processor might insert line breaks when it counts a certain number of characters entered since the last line break. An important principle of the Unicode Standard is that the standard does not specify how to carry out these processes as long as the character encoding and decoding is performed properly and the character semantics are maintained.
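The keyboard-to-screen flow described above can be mimicked in a few lines of Python (an illustrative sketch, not part of the Unicode Standard itself):

```python
# A key press for "T" is stored as the code point U+0054.
ch = "T"
code_point = ord(ch)          # 84 == 0x54
assert code_point == 0x0054

# The display side goes the other way: from the stored number back to a glyph.
print(f"U+{code_point:04X} -> {chr(code_point)}")   # U+0054 -> T
```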

3.1.4 The Unicode Standard and ISO/IEC 10646

The Unicode Standard is fully compatible with the international standard ISO/IEC 10646-1:2000, Information Technology--Universal Multiple-Octet Coded Character Set (UCS)--Part 1: Architecture and Basic Multilingual Plane, which is also known as the Universal Character Set (UCS). During 1991, the Unicode Consortium and the International Organization for Standardization (ISO) recognized that a single, universal character code was highly desirable. A formal convergence of the two standards was negotiated, and their repertoires were merged into a single character encoding in January 1992. Since then, close cooperation and formal liaison between the committees have ensured that all additions to either standard are coordinated and kept synchronized, so that the two standards maintain exactly the same character repertoire and encoding.

3.1.5 The Unicode Consortium

The Unicode Consortium was incorporated in January 1991, under the name Unicode, Inc., to promote the Unicode Standard as an international encoding system for information interchange, to aid in its implementation, and to maintain quality control over future revisions.
The Unicode Technical Committee (UTC) is the working group within the Consortium responsible for the creation, maintenance, and quality of the Unicode Standard. The UTC controls all technical input to the standard and makes associated content decisions. Full Members of the Consortium vote on UTC decisions. Associate and Specialist Members and Officers of the Unicode Consortium are nonvoting UTC participants. Other attendees may participate in UTC discussions at the discretion of the Chair, as the intent of the UTC is to act as an open forum for the free exchange of technical ideas.

3.1.6 ASCII TEXT AND UNICODE TEXT

The Unicode Standard provides the capacity to encode all characters used for the written languages of the world; more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility.

Figure 1-1. Wide ASCII

3.2 CHARACTER SEMANTICS

The Unicode standard includes an extensive database that specifies a large number of character properties, including the following (a short sketch after this list shows how a few of them can be queried):
·         Name
·         Type (e.g., letter, digit, punctuation mark)
·         Decomposition
·         Case and case mappings (for cased letters)
·         Numeric value (for digits and numerals)
·         Combining class (for combining characters)

·         Directionality
·         Line-breaking behavior
·         Cursive joining behavior
·         For Chinese characters, mappings to various other standards and many other properties
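A few of these properties can be queried directly with Python's standard unicodedata module; the sketch below (for illustration only) prints the name, category, numeric value, directionality, and decomposition of three sample characters.

```python
import unicodedata as ud

for ch in ("A", "٣", "é"):    # a Latin letter, an Arabic-Indic digit, an accented letter
    print(ch,
          ud.name(ch),                   # character name
          ud.category(ch),               # type, e.g. Lu = uppercase letter, Nd = decimal digit
          ud.numeric(ch, None),          # numeric value, if any
          ud.bidirectional(ch),          # directionality class
          ud.decomposition(ch) or "-")   # decomposition, if any
```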

3.3 WHY UNICODE STANDARD WAS DESIGNED


·         Universal. The repertoire must be large enough to encompass all characters that are likely to be used in general text interchange, including those in major international, national, and industry character sets.

·         Efficient. Plain text is simple to parse: software does not have to maintain state or look for special escape sequences, and character synchronization from any point in a character stream is quick and unambiguous.

·         Uniform. A fixed character code allows for efficient sorting, searching, display, and editing of text.

·         Unambiguous. Any given 16-bit value always represents the same character.

3.4 WHEN TO USE UNICODE-MODE APPLICABLE

Consider working with Unicode-mode applications only in the following situations:
  • You need to enable users with different languages to view, in their own languages and character sets, information from a common database.
For example, using alias tables in Japanese and German, users in Japan and Germany can view information about a common product set in their own languages.
  • You need to handle artifact names longer than non-Unicode-mode applications support.
For example, application and database names need to include more than eight characters, or you are working with a multibyte character set and need to handle more characters in artifact names.

3.5 ADVANTAGES OF UNICODE STANDARD

  • Unicode can support many more characters than ASCII (it was originally designed as a 16-bit system).
  • The first 128 characters are the same as in the ASCII system, making it compatible.
  • There are 6,400 characters set aside for the user or software (private use).
  • There are still characters which have not been defined yet, future-proofing the system.
  • It has an impact on the performance of the international economy.
  • It enables corporations to manage the high demands of international markets by processing different writing systems at the same time.
  • Character-based encoding.
  • Unicode values are governed by characters (vowels and consonants).
  • Can be ported to any platform and any OS.
  • Can be ported to handheld and mobile devices.
  • Different scripts have their own code blocks.
  • All Asian languages are supported, along with all other languages.
  • Allows multiple languages in the same data.
                                                                   

3.6 DISADVANTAGES OF UNICODE STANDARD

  • Unicode text files can be larger than ASCII files, because encodings such as UTF-16 take at least 2 bytes to store each character (see the quick check below).
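A quick Python check (illustrative only) shows the size difference for a plain ASCII string:

```python
text = "Hello, world"
print(len(text.encode("ascii")))      # 12 bytes in ASCII
print(len(text.encode("utf-8")))      # 12 bytes in UTF-8 (ASCII-compatible)
print(len(text.encode("utf-16-le")))  # 24 bytes in UTF-16: two bytes per character here
```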

3.7 BENEFITS OF UNICODE STANDARD

Support for Unicode provides many benefits to application developers, including:
  • Global source and binary.
  • Support for mixed-script computing environments.
  • Improved cross-platform data interoperability through a common code set.
  • Space-efficient encoding scheme for data storage.
  • Reduced time-to-market for localized products.
  • Expanded market access.
  • For a translated, multibyte base implementation, you may have experienced a "round-trip" problem, where two different bit values can map to the same character; this can occur in communications between multibyte operating systems and application programs.

3.8 THE IMPORTANCE OF UNICODE STANDARD

·         Unicode enables a single software product or a single website to be designed for multiple platforms, languages and countries (no need for re-engineering) which can lead to a significant reduction in cost over the use of legacy character sets.
·         Unicode data can be used through many different systems without data corruption.
·         Unicode represents a single encoding scheme for all languages and characters.
·         Unicode is a common point in the conversion between other character encoding schemes.
·         Unicode is the preferred encoding scheme used by XML-based tools and applications.

3.9 PROBLEM OF UNICODE STANDARD

                          
·         Overloading of the font mechanism when encoding characters: fonts have often been indiscriminately mapped to the same set of bytes.

·         The use of multiple, inconsistent character codes because of conflicting national and industry character standards.


3.10 AVAILABILITY OF UNICODE STANDARD

·         Unicode is not vendor specific.
·         It is backward compatible.
·         Major database, OS, and browser vendors support some form of Unicode encoding.
·         Data migration services will be provided free for e-governance developers.
·         Currently, office documents such as .doc/.docx, .xls/.xlsx, and .txt can be converted to Unicode.
·         Soon, database migration tools will also be made available.
                  

3.11 DIFFERENCE BETWEEN ASCII AND UNICODE STANDARD KEYBOARD


ASCII representation

Unicode standard representation

CHAPTER 4

 CONCLUSION

The objective of this report was to study the Unicode standard. The study brings more insight into the problems the Unicode standard was designed to solve, in particular the two serious problems common to most multilingual computer programs identified earlier. The standard leaves text processes to applications, as long as the character encoding and decoding are performed properly and the character semantics are maintained. I believe the Unicode standard has helped countries whose main language is not English to use their systems to type and send messages to one another.

4.1 SUMMARY

The Unicode standard is the universal character encoding scheme for written characters and text. It enables users with different languages to view information in their own language and character set from a common database. It also enables a single product or a single website to be designed for multiple platforms, languages, and countries, and it is the preferred encoding scheme for XML-based tools and applications.
The Unicode standard is a superset of all character sets in widespread use today. It provides the capacity to encode all characters used for the written languages of the world, with more than 1 million characters encodable, and it treats alphabetic characters, ideographic characters, and symbols equivalently, so they can be mixed with equal facility.
The Unicode standard (Version 3.0) contains 49,194 characters from the world's scripts, of which the unified Han subset contains 27,484 ideographic characters defined by the national and industry standards of China, Japan, Korea, Taiwan, Vietnam, and Singapore.

4.2 RECOMMENDATION

The Unicode standard should be made available to countries that find it difficult to work in English, for example China, Ethiopia, Japan, Korea, etc. Making use of the Unicode standard will enable people in these countries to use desktops, laptops, and even mobile phones, and to easily send messages to each other.
The Unicode standard should also continue to release new versions, i.e. add more characters, so that it can be more useful to the public.

REFERENCE


Armbruster, Carl Hubert (1908). Initia Amharica: An Introduction to Spoken Amharic. Cambridge: Cambridge University Press.

The Unicode Consortium (2003). The Unicode Standard, Version 4.0. Addison-Wesley Professional.

The Unicode Consortium (2006). The Unicode Standard, Version 5.0, Fifth Edition. Addison-Wesley Professional.

Bergsträsser, Gotthelf (1995). Introduction to the Semitic Languages: Text Specimens and Grammatical Sketches.

Bringhurst, Robert (1996). The Elements of Typographic Style.

Campbell, George L. (1990). Compendium of the World’s Languages.

Clews, John (1988). Language Automation Worldwide: The Development of Character Set Standards.

Comrie, Bernard (1987). The World’s Major Languages. Oxford: Oxford University Press.

Comrie, Bernard (1981). The Languages of the Soviet Union. Cambridge: Cambridge University Press.

Felici, James (2002). The Complete Manual of Typography. Adobe Press, 1st edition.

Korpela, Jukka K. (2006). Unicode Explained. O'Reilly, 1st edition.

Allen, Julie D. (2011). The Unicode Standard, Version 6.0. Mountain View: The Unicode Consortium, 120-160.

Graham, Tony (2000). Unicode: A Primer. M&T Books.

The Unicode Consortium (2000). The Unicode Standard, Version 3.0. Addison-Wesley Longman, 150-300.