
Monday, April 1, 2019

International Legal English Certificate Test Of Writing English Language Essay

1. Introduction

This assignment evaluates the Test of Writing of the International Legal English Certificate (ILEC). ILEC is an examination produced by Cambridge ESOL (English for Speakers of Other Languages) in collaboration with TransLegal, a firm of lawyer-linguists. The target candidature for ILEC is legal professionals and law students, operating in the area of international commercial law, who need to provide proof of their language proficiency in English. The assignment will first consider relevant issues in the development of tests for specific purposes and then examine validity aspects of the ILEC Writing paper in detail.

1.1 Tests in Language for Specific Purposes

Testing Language for Specific Purposes (LSP), such as a Test of English for the legal profession, refers to language assessment in which the test content arises from an analysis of specific purpose language use situations; these often (but not always) relate to the language needs of a particular occupational group. Designing LSP tests presents test developers with a number of issues, including: the relationship of test specificity to test generalisability; the importance of ensuring authenticity of test content; the interaction between domain content knowledge and language knowledge; and, for some domains, the difficulty in gaining access to relevant information on the nature of language use in that domain.

1.2 Specificity vs Generalisability

LSP tests have often been directly contrasted with general purpose tests. This is now, however, generally acknowledged to be an oversimplification of the issue, and there is growing consensus that tests do not fall into one group (specific purpose) or the other (general purpose), but that, in the words of Douglas (2000:1), there is a continuum of specificity from the very general to the very specific; all tests are devised for some purpose and fall at some point along the specificity spectrum. The construct of a spectrum or continuum of specificity raises the question of where on the continuum a test should be placed, and the related issue of how generalisable the LSP test is intended to be. Generalisability is often held to decrease in proportion to the specificity of the test: the more specific a test (such as English for Air Traffic Controllers), the less possible it is to generalise from that to other language use situations. This is accepted as a fundamental issue in LSP, to which there are no straightforward answers.

1.3 Background Content Knowledge

In general purpose language testing, background knowledge of topic or cultural content is viewed as a confounding variable, which should be minimised as it has the potential to lead to measurement error. For LSP tests, however, subject-specific content is arguably a defining feature of the test. Nonetheless, the question of separability, that is, how to distinguish between language knowledge and specific background knowledge in analysing candidates' results on a specific purpose language test, has been a recurring concern. Bachman and Palmer (1996) argued, in relation to a test for trainee doctors, that it should be possible to control for background medical knowledge in the interpretation of performance on a language test, by, for example, the administration of knowledge tests alongside the LSP test.
The difficulty in assessing the extent of the test taker's background knowledge and its interaction with language proficiency has been addressed by Clapham (1996), who concluded that background knowledge was undoubtedly a significant factor in the testing of reading, but that its effect varied with the specificity of the test and the language proficiency of the candidate. There has more recently been an acceptance that, until more is known about how the mind deals cognitively with ability and knowledge, specific background knowledge and language performance need to be treated as being inextricably linked (Douglas 2000:39).

1.4 Access to information on language use within the domain

With an increase, in the second part of the 20th century, in the number of people needing to learn English for education, technology and commerce, the main drive behind the development of LSP was practical rather than theoretical. As a result, LSP itself may be said to have suffered from a lack of theoretical underpinning. A key analytical tool has been the use of Needs Analysis to assess the linguistic requirements of a particular target group. Some analyses resulted in long, precise lists of needs for which empirical verification was held to be lacking. Widdowson, for example, described many LSP Needs Analyses as being made up of observational lists with no grounding in theory (Widdowson 1983:8). Alderson, Davies and others have raised similar concerns (Alderson 1988, Davies 1990, Skehan 1984). A further criticism of some needs analyses was that they lacked objectivity, were influenced by the ideological perceptions of the analysts (Robinson 1991:7) and took insufficient note of the students themselves. Nonetheless, assessment of language needs can still inform LSP course and test design. As Clapham has said, we now know that such analyses can become too detailed and also, paradoxically, too limited in scope; however, this does not mean that they are unnecessary (Clapham 1996:5).

Analysis of texts and spoken discourse from particular target language use situations is important in revealing how the target language use (TLU) community communicates and disseminates information. The growth of corpus linguistics and the corresponding development of electronic databases of texts can help in enabling the identification of specific syntactic patterns and use of specific lexis among particular occupational groups or discourse communities. At present, however, there is a limited number of such corpora available, and genre analysis plays an important role when considering communication between members of the occupational group or discourse community in question. According to Swales (1990), texts belonging to a particular genre share common features with regard to the organisation of information, rhetorical conventions and lexico-grammatical patterns, which practitioners within that discourse community need to access and use in order to operate with any degree of effectiveness. Bhatia (1993) developed earlier work by Swales and has extensively researched language use in professional contexts, particularly discourse within business settings. Nonetheless, due to the confidential nature of the work done by some occupational groups (such as lawyers), access to texts from those domains may not be easily acquired.
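By way of illustration only (this is not part of the ILEC development process or materials), the kind of corpus-based identification of field-specific lexis mentioned above might be sketched as follows. The sketch is in Python; the two short text samples are invented placeholders standing in for a genuine specialist corpus and a general-English reference corpus, and the simple frequency ratio stands in for an established keyness statistic such as log-likelihood.

```python
import re
from collections import Counter

# Toy placeholder texts: in a real study these would be large electronic
# corpora of legal documents and a general-English reference corpus.
legal_sample = """The parties hereto agree that the licensee shall indemnify
the licensor against any claim arising under this agreement, notwithstanding
any prior representation. The plaintiff seeks remedies pursuant to clause 12."""
general_sample = """We went to the market on Saturday and bought fruit.
The weather was fine, so the children played outside until the evening.
Later we watched a film and talked about the coming holiday."""

def freqs(text):
    """Lower-case word frequencies per thousand tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    total = len(tokens)
    return {word: 1000 * count / total for word, count in counts.items()}

legal_freq = freqs(legal_sample)
general_freq = freqs(general_sample)

# Crude keyness measure: how much more frequent a word is in the legal
# sample than in the reference sample (smoothed to avoid division by zero).
keyness = {
    word: legal_freq[word] / (general_freq.get(word, 0) + 0.1)
    for word in legal_freq
}

# Print the ten most distinctively "legal" items in the specialist sample.
for word, score in sorted(keyness.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{word:15s} {score:6.1f}")
```

Even this crude comparison pushes items such as "hereto", "indemnify" and "pursuant" towards the top of the list (alongside other words found only in the legal sample), which is precisely the kind of field-specific lexis that the genre analyst or test developer is looking for.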
Swales (1996, cited in Flowerdew and Wan 2006) refers to such confidential professional texts as occluded genres, to which access is usually denied to those outside the participating discourse community. One task for the test developer in such circumstances therefore lies in obtaining subject-specific assistance and advice. Bhatia (1993) reports on how the subject specialist or specialist informant has played a role within LSP genre analysis.

2. The ILEC Writing Test: considering the validity issues

A copy of the ILEC Writing Test is attached in Appendix 1. The test will be evaluated according to its context, theory-based, scoring and consequential validity.

2.1 Context Validity

The term Content Validity was traditionally used to refer to the authenticity and content coverage of the task. Context Validity is now a more widely used term as it also takes into account the discoursal, social and cultural contexts as well as the linguistic content. Context validity in the case of writing tasks also relates to the particular performance conditions under which the operations required for task fulfilment are performed, such as purpose of task, time available, length and specified addressee(s).

2.1.1 Authenticity of task and content coverage

Authenticity of task means that the LSP test tasks should share critical features of tasks in the target language use situation of interest to the test takers (Douglas 2000:2). Bachman and Palmer (1996:23) describe a task as being relatively authentic when its characteristics correspond to those of the Target Language Use (TLU) domain tasks, and define authenticity as the degree of correspondence of the characteristics of a given language test task to the features of a TLU task (1996:23). In terms of the TLU situation, ILEC is a test of English in an international, commercial law context, the design of which is based on the following characteristics of the language environment of the target candidates:

Areas of the law: law of associations; contract law; sale of goods; debtor-creditor law; commercial paper; employment law; intellectual property law; property law; remedies; civil procedure; administrative law; public international law; family law.

Types of lawyer: lawyers practising (and law students who intend to practise) in a commercial law context with elements of international commercial business dealings.

Types of environments that target lawyers work in: business law firms and other law firms with international dealings; in-house corporate counsel; governmental organisations; international organisations.

Types of people that target lawyers must communicate with in English: other international lawyers; members of the international business community; governmental representatives; clients from other countries.

The choice of materials in the Writing Test is based on an analysis of the kinds of tasks that the target lawyers are likely to encounter in their working environment. In a legal context, for example, a legal writing test must engage the test taker in writing tasks which are truly representative of the situations they might plausibly encounter.
The language employed in a legal professional context has very specific technical features that lawyers operating in the field of law must control: "There are lexical, semantic, syntactic, and even phonological characteristics of language peculiar to any field, and these characteristics allow for people in that field to speak and write more precisely about aspects of the field that outsiders sometimes find impenetrable" (Douglas 2000:7). Interestingly, Douglas goes on to cite legalese, characterised by the arcane lexis, the convoluted syntax, the use of Latin terminology, and the endless cross-references to previous laws and cases in legal texts (2000:8), as an example of the requirement for precise, specific purpose language. Clearly, such language has consciously evolved, developed by the legal fraternity to enable its members to engage with each other and to communicate effectively the exact meaning of the law.

A legal test also needs to identify and cover its relevant content domain. Coverage of the appropriate domains of language use is attained through the employment of relevant topics, tasks, text types and contexts. The domains, therefore, need to be specified with reference to the characteristics of the test taker and to the characteristics of the relevant language use contexts. This is the case with the ILEC Writing paper.

2.1.2 Interactional and Situational Authenticity

As a general principle, it is now argued that language tests should, as far as is practicable, place the same requirements on test takers as are involved in writers' responses to communicative settings in non-test, real-life situations. The purpose for writing in this paradigm is essentially about communication rather than accuracy (Hyland 2002:8), emphasising validity, particularly the psychological reality of the task, rather than statistical reliability (ibid:230). These views on writing reflect a concern with authenticity, which has been a predominant theme in recent years for adherents of a communicative testing approach as they attempt to develop tests that approximate to the reality of non-test language use (real life performance) (see Hawkey 2004, Morrow 1979, Weir 1993 and Weir 2003).

The Real-Life (RL) approach (Bachman 1990:41) has proved useful as a means of guiding practical test development. It is most useful in situations in which the domain of language use is relatively homogeneous and identifiable (see O'Sullivan 2006 on the development of the Cambridge Business English examinations). Its primary limitation, however, is that it cannot provide very much information about language ability and hence cannot address validity in the broadest sense. The RL approach has been regarded as encapsulating the notion of communicative testing as it seeks to develop tests that mirror the reality of non-test language use (real life performance). Its prime concerns are: the appearance or perception of the test and how this may affect test performance and test use (face validity); and the accuracy with which test performance predicts non-test performance (predictive validity). A number of attempts have been made to characterise communicative tests (Morrow 1979, Alderson 1981, Porter 1983).
Weir (1988), however, points out that there are inherent problems involved in basing test specifications on empirical research, and observes that the more specific the tasks one identifies, the less one can generalise from performance on their realisation in a test. The concern with situational authenticity requires test writers to make use of texts, situational contexts and tasks which simulate real life without trying to replicate it exactly. The interactional authenticity (IA) approach is concerned with the extent to which test performance reflects language abilities; in other words, the concern is with construct validity. Bachman (1989) summarises the IA approach, arguing that it encapsulates the essential characteristics of communicative language use by reflecting the interactive relationship that exists between the language user, the context and the discourse. The major consideration shifts from that of attempting to sample authentic instances of non-test language use to that of determining the most appropriate combination of test method characteristics. For Bachman, an interactionally authentic test involves the following: some language function in addition to that of demonstrating the test taker's language knowledge; the test taker's language knowledge; the test taker's language schemata; and the test taker's meta-cognitive strategies.

2.1.3 Purpose of task

Task setting (such as Purpose, Response Format, Weighting, Known Criteria, Order of Items, Time Constraints) and Linguistic Demands (such as Channel, Discourse Mode, Text Length, Writer-Reader Relationship, etc.) are normally conveyed through the rubric/instructions supplied to the candidates. It is generally accepted that the presentation of information in the task rubric should be made as explicit as possible in terms of the production demands required of the test taker.

The writing task rubric must present candidates with clear, precise and unequivocal information regarding the purpose for completing the writing task and the target audience for it. This purpose should provide a reason for completing the task that goes beyond a ritual display of knowledge for assessment. It may well involve suspension of disbelief, but having a clear and acceptable communicative purpose in mind is thought to enhance performance. The way the prompt is worded has been shown to affect what the candidate sees as the purpose of the task (Hamp-Lyons 1991 and Moore and Morton 1999). For example, a term like "discuss" is open to different interpretations unless further specified (see Evans 1988).

The ILEC Writing test gives a clear role to the candidate in each task (eg You are a lawyer representing Ms Sandra Meyer.) and a clear purpose and target audience for the task (eg Write a letter to Robert Woodly on behalf of your client, Ms Meyer. Write a memorandum to your colleague to brief him on the case.)

2.1.4 Time Constraints

In writing we are concerned with the time available for task completion: the speed at which processing must take place; the length of time available to write; whether it is an exam or a hand-in assignment; and the number of revisions/drafts allowed (process element). Outside of examination essays, in the real world, writing tasks would not necessarily be timed (although there is a case for speed writing in a working context on occasions, especially in a legal or professional setting where deadlines must be met).
Where time in the workplace is not of the essence, students would be allowed maximum opportunity and access to resources for demonstrating their writing abilities. However, considerations such as time constraints and reliability issues make longer, process-oriented tests impractical in most situations.

Weir (2004) points out that the texts we ask candidates to produce obviously have to be long enough for them to be scored in a valid manner. If we want to establish whether a student can organise a written product into a coherent whole, length is obviously a key factor. He notes that, as regards an appropriate time for completion of product-oriented writing tasks in a formal examination setting, Jacobs et al. (1981:19), in their research on the Michigan Composition Test, found that a time allowance of thirty minutes probably gave most students enough time to produce an adequate sample of their writing ability for the purpose of assessment.

One might reasonably expect that time-restricted test tasks cannot represent what writers are capable of in normal written discourse, where time constraints may be less limiting. Kroll (1990:140-154) reports on research comparing timed classroom essays and essays written at home over a 10-14 day period. Contrary to what one might have expected, the data indicated that, in general, time does not buy very much for students, either in their control over syntax (the distribution of specific language errors being remarkably similar in both) or in their organisational skills.

In the case of ILEC, common tasks are presented to a candidature comprising both B2 and C1 candidates, who must complete the test in 1 hour and 15 minutes.

2.1.5 Text Length

Text length potentially has an important effect in terms of what Weir (2005) calls the executive resources that will be called into play in cognitive processing. These resources are both linguistic and experiential and need to be as similar as possible to those demanded by equivalent tasks in real-life language use if we are to generalise from test performance to language use in the domain of interest. ILEC Writing comprises two tasks, one of between 120 and 180 words and one of between 200 and 250 words.

2.2 Theory-based validity

Theory-based validity involves collecting evidence, through the piloting and trialling of a test before it is made available to candidates, on the cognitive processing activated by the test tasks.

Theory-based validity of a test of writing is a function of how closely it represents the cognitive processing involved in performing the construct in real life. Weir (2005) details how establishing theory-based validity for a writing task involves producing evidence on the nature of the executive resources and executive processing activated by the task. Executive resources involve linguistic resources and content knowledge. Content knowledge may already be possessed by the candidate or might be available in information supplied through task input. The executive process refers to cognitive processing and includes the procedures of goal setting, topic and genre modifying, generating, organizing, translating and reviewing.

Planning relates to a number of stages in the writing process: macro-planning, organisation and micro-planning (Field 2004). Macro-planning entails assembling a set of ideas and drawing upon world knowledge. The writer initially establishes what the goal of the piece of writing is to be.
This includes consideration of the target readership, of the genre of the text (earlier experience as a reader may assist) and of style (level of formality). Grabe and Kaplan (1996) refer to this stage as Goal Setting, which involves "setting goals and purposes, offering an initial draft of task representation and connecting context with verbal working memory" (1996:226). During the Organisation stage the writer provisionally organises the ideas, still in abstract form, a) in relation to the text as a whole and b) in relation to each other. The ideas are evaluated in terms of their relative importance, and decisions are made as to their relative prominence in the text. The outcome may be a set of rough notes. Grabe and Kaplan (1996:226) describe Organizing as "grouping, categorizing ideas, establishing new concepts and putting ideas in suitable order". At the micro-planning level, the writer shifts to a different level and begins to plan conceptually at sentence and paragraph level. Throughout this stage, constant reference back is made to two sets of criteria: to decisions taken at earlier stages and to the manner in which the text has progressed so far. Account is taken of the overall goals of the text, of the organisational plan and the direction in which the text is currently tending, and of the content of the immediately preceding sentence or paragraph. At this stage, the writer needs to give consideration to whether an individual piece of information is or is not shared with the reader a) by virtue of shared world knowledge or b) as a result of earlier mention in the text. These processing procedures are described in detail by Hayes and Flower (1980), Bereiter and Scardamalia (1987), and Grabe and Kaplan (1996).

ILEC Writing tasks require candidates to undertake writing tasks which engage these processing abilities. The Needs Analysis revealed that correspondence between legal firms and clients is a written form of communication often needed by professionals. Furthermore, correspondence is often in the form of a response to an earlier letter and includes reference both to this text and to other documents or texts, such as tax statements, procedural documents and company accounts. This reflects the concept of intertextuality as identified by Kristeva (1980:69); research by others (Flowerdew and Wan 2006) has confirmed the prevalence of the interaction between texts in the corporate world. To reflect the findings of the ILEC Needs Analysis (see Appendix 2), one task on the Test of Writing requires candidates to draw on a previous text and compose a response to it with the use of notes. Composing the response requires the candidate to use a range of functions, including clarifying, refuting, requesting information and referring the target reader to other documentation.

2.3 Scoring Validity

Scoring Validity is linked directly to both context and theory-based validity and accounts for the extent to which test scores are based on appropriate criteria, exhibit consensual agreement in their marking, are as free as possible from measurement error, stable over time, consistent in terms of their content sampling, and engender confidence as reliable decision-making indicators.

The assessment criteria for ILEC Writing (see Appendix 3) are based on those of a General English test at the same levels related to the CEFR.
As Douglas points out, "contrary to the cases of LSP test content and method, LSP assessment criteria have not usually been derived from an analysis of the TLU situation" (Douglas 2001:174). In the same article, he goes on to make a case for basing LSP assessment criteria on an empirical analysis of the TLU situation. It is also the case with ILEC that examiners for both the ILEC Writing and Speaking papers are not required to have a background in Legal English (personal information from ILEC Writing subject staff). It may be argued that this is a weakness in the underpinning scoring validity of the ILEC Writing paper, as assessment by a subject specialist may differ from that of the layperson (ie the general marker).

Jacobs et al. (1981:3) identify aspects of this relating to cognitive process and social interaction: "The direct testing of writing emphasizes the communicative purpose of writing ... (it) utilizes the important intuitive, albeit subjective, resources of other participants in the communicative process: the readers of written discourse, who must be the ultimate judge of the success or failure of the writer's communicative efforts." If candidates' self-assessments of their language abilities, or ratings of the candidate by teachers, subject specialists or other informants (Alderson et al 1995), differ from that of the non-specialist Examiner, predictive validity may be compromised.

2.4 Consequential Validity

Messick (1989:18) argues that "for a fully unified view of validity, it must be recognised that the appropriateness, meaningfulness, and usefulness of score-based inferences depend as well on the social consequences of the testing. Therefore social values and social consequences cannot be ignored in considerations of validity." Consequential Validity relates to the way in which the implementation of a test can affect the interpretability of test scores: the practical consequences of the introduction of a test (McNamara 2000). Shohamy (1993:37) argues that "Testers must begin to examine the consequences of the tests they develop ... often they do not find it necessary to observe the actual use of the test." Weir (2005) provides a comprehensive treatment of these key elements within his socio-cognitive validation framework.

ILEC has achieved recognition by a number of different legal entities, including universities and law practices in 36 countries (see Appendix 4). Furthermore, the initial market research and viability study was administered to a number of stakeholders in the field, including international and local law firms, large companies with their own legal departments, university law faculties, legal training providers and language schools. Although the exam fee may be considered costly, which is arguably an implication of the social consequences of testing, it may be argued that within the domain of corporate/commercial law the consequential validity in this respect is not unsound.

3. Conclusion

This assignment has examined the ILEC Test of Writing. The development of ILEC saw collaboration between assessment specialists and legal content specialists, with each bringing expertise to the process. This has arguably resulted in a test which authentically simulates the TLU situation and, as a result, it may be concluded that the test is sound in terms of Context, Theory-based and Consequential validity.
Where the test is arguably less strong is in the area of Scoring Validity (and the resulting impact this issue may be said to have on Consequential Validity), in the use of assessment criteria and examining personnel unrelated to the TLU and the specific LSP domain.

Word Count: 4,125
